## Decouple Knowledge From Parameters For Plug-And-Play Language Modeling
Xin Cheng 1, Yankai Lin 2,6, Xiuying Chen 3, Dongyan Zhao 1,4,5∗, Rui Yan 2,6∗

1 Wangxuan Institute of Computer Technology, Peking University 2 Gaoling School of Artificial Intelligence, Renmin University of China 3 Computational Bioscience Research Center, KAUST 4 BIGAI, Beijing, China 5 National Key Laboratory of General Artificial Intelligence 6 Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education
## Abstract
Pre-trained language models (PLM) have achieved impressive results on various NLP tasks. It has been revealed that one of the key factors behind their success is that the parameters of these models implicitly learn various kinds of knowledge during pre-training. However, encoding knowledge implicitly in the model parameters has two fundamental drawbacks. First, the knowledge is neither editable nor scalable once the model is trained, which is especially problematic given that knowledge is constantly evolving. Second, it lacks interpretability and prevents humans from understanding which knowledge the PLM requires for a certain problem. In this paper, we introduce PlugLM, a pre-training model with a differentiable plug-in memory (DPM). The key intuition is to decouple the knowledge storage from model parameters with an editable and scalable key-value memory and to leverage knowledge in an explainable manner via knowledge retrieval in the DPM. To justify this design choice, we conduct evaluations in three settings: (1) domain adaptation. PlugLM obtains 3.95 F1 improvements across four domains on average without any in-domain pre-training.
(2) knowledge update. PlugLM could absorb new knowledge in a training-free way after pretraining is done. (3) in-task knowledge learning. PlugLM could be further improved by incorporating training samples into DPM with knowledge prompting1.
## 1 Introduction
Large pre-trained language models (PLM) (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2018) have become a revolutionary breakthrough in the NLP area. Optimized with carefully designed self-supervised objectives on unlabeled corpora and fine-tuned on downstream tasks, PLMs perform remarkably well on a wide range of NLP benchmarks.

∗Corresponding author.

¹Code available at https://github.com/Hannibal046/PlugLM
Recent studies (Warstadt et al., 2019; Petroni et al.,
2019) have revealed that one of the key factors behind the success of PLMs is that the parameters of these models implicitly learn various types of knowledge in the pre-training corpus. Owing to this learned syntactic, semantic, factual and commonsense knowledge, PLMs show strong understanding, generalization and reasoning abilities on multiple downstream tasks (Rogers et al., 2020; Izacard et al., 2022). As Geva et al. (2021) pointed out, the feed-forward layers (FFN), constituting two-thirds of a transformer model's parameters, are essentially key-value memories and store all kinds of knowledge of the PLM: the first linear layer of FFN acts as a set of sparsely activated keys detecting input patterns, while the second provides the corresponding values. To capture even more knowledge, ever larger PLMs keep being proposed, from the 110M-parameter BERT (Devlin et al., 2019) to the 530B-parameter MT-NLG (Smith et al., 2022), yet PLMs have not reached their upper bound (Ouyang et al., 2022).
However, a fundamental question still remains:
For a PLM, is implicitly encoding knowledge in its parameters the optimal choice? We argue that the implicit knowledge encoding approach has two fundamental drawbacks. First, the learned knowledge is neither editable nor scalable once the model is trained (e.g., BERT does not know what a BERT is). Yet world knowledge is infinite and constantly evolving; we can never expect an ever-larger model to capture all knowledge in its parameters, nor to be continuously retrained for newly emerging knowledge. Second, current PLMs lack interpretability at the knowledge level. Implicit knowledge encoding fails to provide provenance for the model's predictions and makes the PLM a black box, preventing humans from understanding which knowledge the PLM requires for a certain problem.
In this work, we propose a novel architecture of PLM, PlugLM, which decouples the knowledge storage from model parameters and leverages the knowledge explicitly in an explainable manner. As shown in Figure 1, we substitute the functionality of the FFN layer with a differentiable plug-in key-value memory (DPM), which is highly scalable as well as editable. Each slot of DPM encodes a piece of knowledge into a key-value pair, so we can explicitly retrieve the required knowledge in natural language from DPM rather than from unnamed vectors in FFN.
To justify the design choice of decoupling the knowledge from parameters, we conduct extensive evaluations under different settings. In the domain adaptation setting, PlugLM could be easily adapted to different domains with a pluggable in-domain memory, obtaining 3.95 F1 improvements across four domains on average and up to 11.55 F1 improvement on the ACL-ARC citation intent classification dataset, without any in-domain pre-training. In the knowledge update setting, PlugLM could absorb new knowledge after pre-training is done in a training-free way via the knowledge updating operation in DPM, with an improvement of up to 4 F1 points on the LINNAEUS NER dataset. PlugLM could further be improved by incorporating training samples into DPM with knowledge prompting as a kind of in-task knowledge.
## 2 Related Work
Investigating FFN Feed-forward layers constitute two-thirds of a transformer model's parameters and are essential to unveil modern PLMs (Geva et al., 2021, 2022). A surge of works have investigated the knowledge captured by FFN (Dai et al.,
2022a; Meng et al., 2022; Geva et al., 2021, 2022; Jiang et al., 2020; Yao et al., 2022; Wallat et al.,
2021). Based on the view that FFN is essentially an unnormalized key-value memory network, Dai et al. (2022a) detect knowledge neurons in FFN and edit specific factual knowledge without fine-tuning. Meng et al. (2022) modify FFN weights to update specific factual associations using Rank-One Model Editing. Yao et al. (2022) inject knowledge into the FFN via BM25. Dai et al. (2022b)
and Lample et al. (2019) enhance the model by expanding the size of FFN with extra trainable keys and values.
Knowledge-Augmented Language Model There are two lines of work for equipping PLMs with knowledge. The first introduces an additional Knowledge Graph (KG) and knowledge-based training signals (e.g., entity linking) into language model pre-training, as in ERNIE (Zhang et al., 2019; Sun et al., 2019), KnowBERT (Peters et al., 2019) and KEPLER (Wang et al., 2021).
Another line of work adopts a retrieval mechanism to incorporate knowledge, either symbolic (Verga et al., 2020; Agarwal et al., 2021; Févry et al., 2020) or textual (Guu et al., 2020; Lewis et al., 2020c; Borgeaud et al., 2022; Lewis et al., 2020a; Verga et al., 2020; de Jong et al., 2022). These works formulate the task as a retrieve-then-predict process: an extra neural dense retriever or sparse retriever finds the most relevant supporting knowledge, which is then combined with the input via concatenation (Guu et al., 2020; Lewis et al., 2020c), attention (de Jong et al., 2022; Chen et al., 2022), or interpolation (Khandelwal et al., 2020; Zhong et al., 2022).

PlugLM differs from previous works in that we do not try to equip the model with additional knowledge to perform knowledge-intensive tasks. The key insight is to transform the FFN architecture into deep retrieval so as to decouple the knowledge that would otherwise be stored in the parameters; this is orthogonal to all retrieval-augmented PLMs.
## 3 Preliminary
**Feed-forward Layers** Transformer (Vaswani et al., 2017), the backbone of all PLMs, is made of stacked self-attention (Self-Attn) and feed-forward (FFN) layers. The former captures the contextual interaction among inputs, while the latter processes each input independently. Let $x \in \mathbb{R}^{d_1}$ be an input vector; the FFN can be formulated as:

$$\mathrm{FFN}(x) = \sigma(x \cdot \mathbf{W}_1^\top) \cdot \mathbf{W}_2 \tag{1}$$

where $\mathbf{W}_1, \mathbf{W}_2 \in \mathbb{R}^{d_2 \times d_1}$ and $\sigma$ is the activation function. The bias terms are omitted for brevity.
**Key-Value Memory Network** A key-value memory network (Weston et al., 2014; Sukhbaatar et al., 2015) corresponds to $d_2$ key-value pairs, where each key/value is a vector in $\mathbb{R}^{d_1}$. It is a generalization of the way knowledge is stored (Eric et al., 2017; Miller et al., 2016). For an input $x \in \mathbb{R}^{d_1}$, a key-value memory network operates in two stages. First, the lookup (addressing) stage computes the matching degree between $x$ and each key. In the second stage, $x$ is transformed into the weighted sum of values according to the distribution of matching degrees from the first stage. We can formally define it as:

$$\mathrm{MemoryNetwork}(x) = \mathrm{softmax}(x \cdot \mathbf{K}^\top) \cdot \mathbf{V} \tag{2}$$

where $\mathbf{K}, \mathbf{V} \in \mathbb{R}^{d_2 \times d_1}$. Comparing Equations (1) and (2), we can see that the FFN is an unnormalized version of a memory network. The keys in FFN are pattern detectors that are activated only when certain patterns occur in the input. This explains how FFN stores knowledge in a key-value manner (Geva et al., 2021; Sukhbaatar et al., 2019).
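To make the analogy between Equations (1) and (2) concrete, the following minimal PyTorch sketch computes both transformations for a single toy vector; the dimensions, random weights and the ReLU activation are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F

d1, d2 = 8, 32                      # hidden size and number of key-value slots
x = torch.randn(d1)                 # one token representation
W1 = torch.randn(d2, d1)            # FFN "keys": rows act as pattern detectors
W2 = torch.randn(d2, d1)            # FFN "values": rows are mixed by the key scores

# Eq. (1): FFN(x) = sigma(x . W1^T) . W2   (bias omitted, sigma = ReLU here)
ffn_out = F.relu(x @ W1.T) @ W2

# Eq. (2): MemoryNetwork(x) = softmax(x . K^T) . V
# Same shape of computation, but the addressing scores form a distribution over slots.
K, V = W1, W2
mem_out = F.softmax(x @ K.T, dim=-1) @ V

print(ffn_out.shape, mem_out.shape)  # both torch.Size([8])
```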
## 4 PlugLM
The overall architecture of PlugLM is illustrated in Figure 1. Because FFN is essentially a key-value memory network (Geva et al., 2021; Dai et al.,
2022a; Meng et al., 2022), PlugLM decouples the knowledge storage from model parameters by replacing² the FFN with a Differentiable Plug-in key-value Memory, DPM (§4.1), and conducting knowledge retrieval in DPM with knowledge attention (§4.2) for explicit knowledge usage, instead of storing all knowledge implicitly in the model parameters. In §4.3, we explain in detail how PlugLM is trained in both the pre-training and fine-tuning stages.

²Because different layers in the transformer capture different knowledge (the lower layers capture shallow patterns while the upper layers capture more semantic ones; Geva et al., 2021), we only consider replacing the FFN in the top-L layers with DPM while keeping the FFN in the lower layers untouched to encode intrinsic language understanding knowledge, as detailed in §5.4.
## 4.1 Differentiable Plug-In Memory
In this paper, we view the $n$-th knowledge entry $d_n = \{t_n^1, t_n^2, ..., t_n^{|d_n|}\}$ as a span of consecutive tokens from an unlabeled corpus, as in Guu et al. (2020). For each $d_n$, we obtain its dense representation $h_n$ from a knowledge encoder $\mathrm{KnowEncoder}(\cdot)$:

$$h_n = \mathrm{AttnPooling}(\mathrm{E}_{\mathrm{Token}}(d_n) + \mathrm{E}_{\mathrm{Pos}}(d_n)) \tag{3}$$

where the attentive pooling function (Xu et al., 2021; Cheng et al., 2023a) acts as a trainable pattern detector that aggregates information from a sequence of inputs, and $\mathrm{E}_{\mathrm{Token}}$ and $\mathrm{E}_{\mathrm{Pos}}$ denote the token embedding and positional embedding. We then use two independent mapping functions to project $h_n$ into the key space and the value space:

$$k_n = \mathbf{W}_k \cdot h_n + \mathbf{b}_k \tag{4}$$

$$v_n = \mathbf{W}_v \cdot h_n + \mathbf{b}_v \tag{5}$$

where $\mathbf{W}_k$, $\mathbf{W}_v$, $\mathbf{b}_k$ and $\mathbf{b}_v$ are trainable parameters. DPM is then the triplet $\langle\mathbb{D}, \mathbb{K}, \mathbb{V}\rangle$:

$$\mathbb{D} = \{d_1, d_2, ..., d_{|\mathbb{D}|}\} \tag{6}$$

$$\mathbb{K} = \{k_1, k_2, ..., k_{|\mathbb{D}|}\} \tag{7}$$

$$\mathbb{V} = \{v_1, v_2, ..., v_{|\mathbb{D}|}\} \tag{8}$$
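As a rough illustration of how DPM could be constructed from Equations (3)-(8), here is a minimal PyTorch sketch; the toy vocabulary size, hidden size, single-linear-layer attentive pooling scorer, and random knowledge chunks are assumptions for brevity, whereas the actual KnowEncoder is trained end-to-end with the rest of the model.

```python
import torch
import torch.nn as nn

class KnowEncoder(nn.Module):
    """Toy knowledge encoder: embeddings + attentive pooling + key/value heads."""
    def __init__(self, vocab_size=1000, max_len=128, d=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d)   # E_Token
        self.pos = nn.Embedding(max_len, d)      # E_Pos
        self.att = nn.Linear(d, 1)               # attentive pooling scorer
        self.W_k = nn.Linear(d, d)               # key projection   (Eq. 4)
        self.W_v = nn.Linear(d, d)               # value projection (Eq. 5)

    def forward(self, token_ids):                # token_ids: (seq_len,)
        pos_ids = torch.arange(token_ids.size(0))
        h = self.tok(token_ids) + self.pos(pos_ids)      # (seq_len, d)
        alpha = torch.softmax(self.att(h), dim=0)        # weights over tokens
        h_n = (alpha * h).sum(dim=0)                     # Eq. (3): pooled representation
        return self.W_k(h_n), self.W_v(h_n)              # (k_n, v_n)

# Build the triplet <D, K, V>: each knowledge entry is a chunk of token ids.
encoder = KnowEncoder()
D = [torch.randint(0, 1000, (16,)) for _ in range(5)]    # five toy knowledge chunks
pairs = [encoder(d_n) for d_n in D]
K = torch.stack([k for k, _ in pairs])                   # (|D|, d)
V = torch.stack([v for _, v in pairs])                   # (|D|, d)
print(K.shape, V.shape)
```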
## 4.2 Memory Fusion
For hidden states $h \in \mathbb{R}^{l \times d}$ from Self-Attn, the FFN would transform $h$ with an unnormalized key-value memory as in Equation (1). Our key insight is that, instead of interacting with unnamed vectors in the FFN, we conduct Maximum Inner Product Search (MIPS) to retrieve knowledge in natural language from $\langle\mathbb{D}, \mathbb{K}, \mathbb{V}\rangle$, where each triplet corresponds to one knowledge entry along with its key and value representations. For $h$, we first get its sentence-level representation $z$ with an attentive pooling function, $z = \mathrm{AttentivePooling}(h)$, and then use $z$ as the query vector to $\langle\mathbb{D}, \mathbb{K}, \mathbb{V}\rangle$. Since PLM is internally sparse (Li et al., 2022), we only consider the Top-N knowledge $\mathbb{D}_z$ with corresponding keys $\mathbb{K}_z$ and values $\mathbb{V}_z$:

$$\mathbb{K}_z = \mathrm{Top\text{-}N}(\mathrm{MIPS}(z, \mathbb{K})) \tag{9}$$

$$\mathbb{V}_z = \{v_i \ \mathrm{if} \ k_i \ \mathrm{in} \ \mathbb{K}_z\} \tag{10}$$

$$\mathbb{D}_z = \{d_i \ \mathrm{if} \ k_i \ \mathrm{in} \ \mathbb{K}_z\} \tag{11}$$

where Top-N also provides the indexing operation. With $\mathbb{K}_z$ and $\mathbb{V}_z$, we use knowledge attention to fuse the retrieved knowledge into our model:

$$\mathrm{Attention}(h, \mathbb{K}_z, \mathbb{V}_z) = \mathrm{softmax}\!\left(\frac{h\mathbb{K}_z^\top}{\sqrt{d}}\right)\mathbb{V}_z \tag{12}$$
where $d$ is the head dimension. Through knowledge retrieval and fusion, we obtain an interpretable way to incorporate knowledge into the model, where $\mathbb{D}_z$ is the actual knowledge that the PLM leverages. Moreover, direct modification of $\mathbb{D}$ without changing model parameters gives PlugLM great flexibility and scalability in the domain adaptation (§5.1) and knowledge update (§5.2) scenarios.
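The retrieval-and-fusion procedure in Equations (9)-(12) can be sketched roughly as below; mean pooling stands in for the attentive pooling that produces the query $z$, a brute-force inner-product search stands in for an approximate MIPS index, and single-head attention replaces the model's multi-head knowledge attention.

```python
import math
import torch

def memory_fusion(h, K, V, D, n=5):
    """h: (l, d) hidden states; K, V: (|D|, d) DPM keys/values; D: knowledge texts."""
    z = h.mean(dim=0)                            # stand-in for AttentivePooling(h)
    scores = K @ z                               # inner products with every key (MIPS)
    top = torch.topk(scores, k=n).indices        # Eq. (9): Top-N key indices
    K_z, V_z = K[top], V[top]                    # Eqs. (10)-(11): selected keys/values
    D_z = [D[i] for i in top.tolist()]           # retrieved knowledge in natural language
    attn = torch.softmax(h @ K_z.T / math.sqrt(h.size(-1)), dim=-1)   # Eq. (12)
    return attn @ V_z, D_z                       # fused states (l, d) and provenance

l, d, num_slots = 4, 64, 100
h = torch.randn(l, d)
K, V = torch.randn(num_slots, d), torch.randn(num_slots, d)
D = [f"knowledge chunk {i}" for i in range(num_slots)]
fused, provenance = memory_fusion(h, K, V, D)
print(fused.shape, provenance)
```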
## 4.3 Training
The backbone of our model is a multi-layer bidirectional transformer encoder (Devlin et al., 2019).
There are two phases in our framework: pre-training and fine-tuning. In the pre-training phase, to make the whole training process end-to-end trainable, we use asynchronous index refreshing to optimize our model, as done in Guu et al. (2020) and Cai et al. (2021). Concretely, we update the index of DPM every T steps. The MIPS results are based on the stale index, while the scores of the selected Top-N results are recomputed using $\mathrm{KnowEncoder}(\cdot)$, which lets gradients flow back to the memory. The training objective is Masked Language Modeling (Devlin et al., 2019), where we randomly mask tokens in a sentence and ask PlugLM to predict them. In the pre-training phase, Wikipedia is chosen as the knowledge source; in the domain adaptation fine-tuning stage, corpora from other domains are treated as knowledge sources, as detailed in §5.1. More details are given in Appendix A. In the fine-tuning phase, the K and V of DPM are fixed, and we treat DPM as an editable and scalable knowledge lookup table.
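A minimal sketch of the pre-training loop with asynchronous index refreshing is given below; `model`, `know_encoder`, `corpus_chunks`, `loader` and `optimizer` are placeholder names, and in practice the refreshed key matrix would be loaded into an ANN/MIPS index rather than kept as a dense tensor.

```python
import torch

T = 500  # refresh the DPM index every T optimizer steps

def refresh_index(know_encoder, corpus_chunks):
    """Re-encode all knowledge chunks into a key matrix without tracking gradients."""
    with torch.no_grad():
        keys = torch.stack([know_encoder(chunk)[0] for chunk in corpus_chunks])
    return keys

def pretrain(model, know_encoder, corpus_chunks, loader, optimizer):
    stale_keys = refresh_index(know_encoder, corpus_chunks)
    for step, batch in enumerate(loader):
        # Retrieval uses the stale index (cheap, no gradient); the Top-N scores are
        # recomputed inside the model with the current KnowEncoder so that gradients
        # flow back into the memory. The loss is masked language modeling.
        loss = model(batch, stale_keys, know_encoder)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        if (step + 1) % T == 0:              # periodic asynchronous refresh
            stale_keys = refresh_index(know_encoder, corpus_chunks)
```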
## 5 Experiments
PlugLM mainly tries to decouple the knowledge storage from parameters and leverage knowledge in an explainable way. We conduct comprehensive experiments to show the superiority of this novel architecture: we could easily adapt the model to different domains without in-domain pre-training by switching DPM (§5.1.1 and §5.1.2), alleviate catastrophic forgetting by storing DPM (§5.1.1),
inject new knowledge into the model by enlarging DPM (§5.2), further enhance the model by injecting in-task knowledge into DPM (§5.3) and unveil the black-box PLM through direct access to the knowledge retrieved from DPM (Appendix D). We also carefully examine each key design in PlugLM and point out directions for future work in §5.4.
## 5.1 Domain Adaptation
Learning robust and transferable representations has been at the core of language model pre-training (Peters et al., 2019). For general-purpose PLMs to generalize well on domain-specific tasks, endowing the model with domain knowledge via in-domain training remains the go-to approach (Gururangan et al., 2020; Whang et al., 2020; Zhang et al., 2020; Li et al., 2023). In this section, we show that without any in-domain pre-training, PlugLM can flexibly adapt to multiple domains with a domain-specific DPM. For existing PLMs that encode knowledge in parameters, this is a challenging task: they cannot guarantee generalization across multiple domains due to catastrophic forgetting (Kirkpatrick et al., 2016), and it is sometimes even computationally unaffordable to keep training super-large models (Smith et al., 2022; Brown et al., 2020).
We consider two adaptation scenarios: domain adaptive post-training (§5.1.1) and in-domain pretraining (§5.1.2). The former is conducted after PLM was trained on the general domain and the latter trains a domain-specific PLM from scratch.
## 5.1.1 Domain Adaptive Post-Training
Experimental Setting Following Gururangan et al. (2020), we conduct experiments on four domains: BIOMED, CS, NEWS and REVIEWS across eight domain-specific downstream tasks, in both low and high resource settings. More details can be found in Appendix B. When fine-tuning, we pass the final [CLS] representation to a task-specific head as in Devlin et al. (2019).
| Model | CHEM. | RCT | ACL. | SCI. | HYP. | AG. | HP. | IMDB | Avg. Gain | Avg. Cost |
|---|---|---|---|---|---|---|---|---|---|---|
| WikiBERT | 77.72 | 86.52 | 61.58 | 79.95 | 83.54 | 93.38 | 67.62 | 89.79 | - | - |
| + DAPT | 78.24 | 86.71 | 67.56 | 80.82 | 86.22 | 93.49 | 68.11 | 90.12 | +1.40 | 47.7 h |
| ¬ DAPT | 75.82 | 86.11 | 62.11 | 78.42 | 80.12 | 93.31 | 68.11 | 89.54 | -0.82 | - |
| + DACT | 76.34 | 86.11 | 61.19 | 78.56 | 80.52 | 93.29 | 68.08 | 89.88 | -0.77 | - |
| REALM | 78.28 | 85.12 | 62.07 | 78.41 | 84.12 | 92.58 | 67.06 | 90.56 | - | - |
| + DAA | 79.32 | 85.98 | 68.92 | 80.41 | 85.36 | 92.61 | 68.51 | 93.01 | +1.98 | 6.3 h |
| ¬ DAA | 77.61 | 85.12 | 64.78 | 75.31 | 82.28 | 92.41 | 66.13 | 91.21 | -0.41 | - |
| + DAR | 80.56 | 85.32 | 70.12 | 81.16 | 86.58 | 93.01 | 67.42 | 92.16 | +2.26 | 6.3 h |
| PlugLM | 78.02 | 87.12 | 63.77 | 78.56 | 84.32 | 93.23 | 67.83 | 91.24 | - | - |
| + DAA | 82.56 | 88.13 | 72.51 | 83.00 | 88.16 | 94.11 | 69.28 | 92.56 | +3.28 | 0.16 h |
| ¬ DAA | 77.98 | 86.13 | 64.78 | 78.13 | 84.18 | 92.99 | 67.56 | 90.88 | -0.18 | - |
| + DAR | 83.80 | 88.98 | 75.32 | 82.56 | 89.26 | 93.55 | 69.41 | 92.78 | +3.95 | 0.16 h |

Table 1: Domain adaptive post-training results. CHEM. and RCT are BIOMED tasks; ACL. and SCI. are CS tasks; HYP. and AG. are NEWS tasks; HP. and IMDB are REVIEWS tasks. Avg. Gain is the average improvement over the corresponding base model and Avg. Cost is the adaptation cost in hours.
We have the following baselines: **WikiBERT**
uses the architecture of BERT-base (Devlin et al.,
2019) and is pre-trained on Wikipedia. To adapt WikiBERT to other domains, we use DAPT following the training setting in Gururangan et al. (2019).
REALM (Guu et al., 2020) and **PlugLM** are models that have an external knowledge base and can be simply adapted to other domains with a different base. We have two adaptation strategies: DAA,
short for Domain Adaptive Addition, appends domain knowledge to the knowledge base, and DAR,
Domain Adaptive Replacement, replaces general knowledge with domain-specific knowledge in the knowledge base.
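Assuming the KnowEncoder interface sketched in §4.1 and a knowledge encoder that stays frozen after pre-training, the two adaptation strategies above reduce to simple operations on the ⟨D, K, V⟩ triplet, roughly as follows.

```python
import torch

def encode(chunks, know_encoder):
    """Encode text chunks into key/value matrices with the frozen knowledge encoder."""
    with torch.no_grad():
        pairs = [know_encoder(chunk) for chunk in chunks]
    return torch.stack([k for k, _ in pairs]), torch.stack([v for _, v in pairs])

def daa(dpm, domain_chunks, know_encoder):
    """Domain Adaptive Addition: append domain knowledge to the existing <D, K, V>."""
    D, K, V = dpm
    K_new, V_new = encode(domain_chunks, know_encoder)
    return D + list(domain_chunks), torch.cat([K, K_new]), torch.cat([V, V_new])

def dar(dpm, domain_chunks, know_encoder):
    """Domain Adaptive Replacement: swap general knowledge for domain knowledge."""
    K_new, V_new = encode(domain_chunks, know_encoder)
    return list(domain_chunks), K_new, V_new
```

The same addition operation also underlies the knowledge update setting in §5.2, where new slots are appended to DPM after pre-training is done.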
We also include the results of ¬DAPT, ¬DAA
and DACT. The former two use irrelevant domain corpora for post-training and knowledge base construction, which are used to test the robustness of the adaptation method and rule out the factor that improvements might be attributed simply to exposure to more data3. For DACT, Domain Adaptive Continual Training, we sequentially use DAPT for WikiBERT in multiple domains in the hope that it can capture and store knowledge from various domains in a lifelong learning way (Rostami, 2021).
Experimental Results The results are shown in Table 1. Avg. Cost is the cost of adaptation measured in hours: for WikiBERT, it is the time to post-train the model on the domain-specific corpus; for REALM and PlugLM, it is the time to encode the domain knowledge into the knowledge base. We can observe: (1) In-domain training helps the model generalize better to tasks requiring domain knowledge, while irrelevant knowledge misleads the model and causes performance degradation. Comparing ¬DAPT and ¬DAA shows that models with an external knowledge base (PlugLM and REALM) are more robust when faced with noisy out-of-domain knowledge. (2) A model that implicitly encodes knowledge in its parameters fails to generalize across domains, as the result of DACT indicates. For example, we keep training WikiBERT in the NEWS domain after DAPT in the CS domain and then fine-tune it on the CS downstream tasks; it performs on par with a model that was never exposed to the CS domain (¬DAPT). PlugLM alleviates this catastrophic forgetting problem by storing all kinds of knowledge in DPM and using it in a plug-and-play manner. (3) Direct modification of the external memory helps PlugLM adapt to different domains efficiently and effectively without in-domain training. In 254× less time than DAPT and 40× less time than REALM, PlugLM significantly outperforms the DAPT- and REALM-based methods.
To further understand PlugLM, Figure 2 presents the distribution of the actually retrieved knowledge for DAA, DAR and the original PlugLM. A clear pattern is that with more domain knowledge involved, the model performs better (63.77, 72.51 and 75.32). Remarkably, although pre-trained on the general domain, PlugLM has learned what to retrieve when both general and domain-specific knowledge are present in DPM, as the DAA visualization shows.
## 5.1.2 In-Domain Pre-Training
In-domain pre-training is another line of work for domain-specific PLM training from scratch like BioBERT (Lee et al., 2019), SciBERT (Beltagy et al., 2019) and FinBERT (Araci, 2019).
Experimental Setting In this section, we choose the biomedical domain and compare PlugLM with models of the BERT-base architecture pre-trained on the general domain, Wikipedia (i.e., WikiBERT), or on the biomedical domain, PubMed (i.e., PubmedBERT). The statistics of the datasets and pre-training details are listed in Appendix F. We test two kinds of abilities of these PLMs. First, we test how they perform on biomedicine-relevant downstream tasks. Specifically, we conduct experiments on eight representative biomedical NER datasets, which aim at recognizing domain-specific proper nouns in the biomedical corpus.
Then we test their general language understanding ability in GLUE (Wang et al., 2019) and SQUAD (Rajpurkar et al., 2016, 2018). For SQUAD and GLUE,
the DPM is constructed from Wikipedia, and for biomedical NER, DPM is from PubMed (Canese and Weis, 2013).
Experimental Results The results are shown in Table 3. Although both are pre-trained on Wikipedia, PlugLM outperforms WikiBERT on 8/8 NER tasks by 1.75 F1 on average, simply by switching the knowledge domain of DPM. PlugLM also gives results comparable to PubmedBERT on the BC4CHEMD, JNLPBA and LINNAEUS datasets. Although PubmedBERT works well for biomedical tasks, it shows weaker general language understanding ability and underperforms WikiBERT and PlugLM on GLUE (Table 4) and SQUAD (Table 2), especially in the low-resource scenario (i.e., the RTE, COLA and MRPC datasets). With DPM, PlugLM shows great flexibility and performs well in both the general and biomedical domains. In Appendix D, we give concrete cases of the knowledge retrieved by PlugLM.
| Dataset | PubmedBERT EM | PubmedBERT F1 | WikiBERT EM | WikiBERT F1 | PlugLM EM | PlugLM F1 |
|---|---|---|---|---|---|---|
| SQUAD(v1) | 76.68 | 84.56 | 81.32 | 88.68 | 82.19 | 89.44 |
| SQUAD(v2) | 68.44 | 71.12 | 72.64 | 75.89 | 73.76 | 76.90 |

Table 2: SQUAD results measured by EM and F1.
## 5.2 Knowledge Update
Since the world is not a fixed snapshot once the pre-training corpus is collected, current PLMs, no matter how large, fail to adapt to this changing world. For colossal PLMs like GPT-3 (Brown et al., 2020) and MT-NLG (Smith et al., 2022), efficiently fine-tuning for downstream tasks remains an open challenge, let alone re-training them on newly arriving knowledge.

Experimental Setting In this section, we show that PlugLM can efficiently absorb new knowledge by updating ⟨D, K, V⟩ without re-training.
| Type | Dataset | # Annotation | WikiBERT | PlugLM | PubmedBERT |
|---|---|---|---|---|---|
| Disease | NCBI-disease | 6811 | 83.65 | 85.96 | 88.39 |
| Disease | BC5CDR | 12694 | 80.37 | 82.10 | 83.89 |
| Drug/Chem. | BC4CHEMD | 79842 | 87.07 | 89.93 | 89.35 |
| Drug/Chem. | BC5CDR | 15411 | 88.79 | 90.56 | 92.75 |
| Gene/Protein | BC2GM | 20703 | 80.63 | 82.14 | 83.16 |
| Gene/Protein | JNLPBA | 35460 | 75.49 | 76.39 | 76.25 |
| Species | LINNAEUS | 4077 | 85.32 | 87.01 | 86.11 |
| Species | SPECIES-800 | 3708 | 68.54 | 69.73 | 71.32 |
Table 3: Performance of biomedical NER measured by F1 score across eight datasets.
| Model | #Params | Avg. Latency | RTE | COLA | MRPC | STS-B | SST-2 | QNLI | QQP | MNLI (m/mm) |
|---|---|---|---|---|---|---|---|---|---|---|
| PubmedBERT | 110M | ×1.00 | 61.17 | 50.06 | 84.56 | 85.73 | 88.64 | 90.11 | 88.78 | 82.14/82.56 |
| WikiBERT | 110M | ×1.00 | 65.70 | 53.53 | 88.85 | 88.64 | 92.32 | 90.66 | 89.71 | 83.91/84.10 |
| PlugLM | 109M | ×2.54 | 70.40 | 52.68 | 91.54 | 89.20 | 91.86 | 91.28 | 90.56 | 84.56/85.35 |
Table 4: GLUE results. Detailed metrics and the latency of each model are given in Appendix C.
We consider the following two settings: (1) we pre-train PlugLM with only limited data and gradually enlarge DPM with unseen knowledge when fine-tuning; (2) we pre-train PlugLM with the full general-domain data and ask the model to perform domain adaptation in the DAR manner by gradually increasing the domain knowledge in ⟨D, K, V⟩.
Experimental Results The results are shown in Figures 3a and 3b. For the first setting, we test on QA (SQUAD) and sentiment classification (SST-2). Both WikiBERT and PlugLM are pre-trained on only 1/4 of the Wikipedia corpus. We have the following observations: (1) PlugLM trained with limited data already outperforms WikiBERT on both tasks (by 0.39 EM in QA and 0.59 accuracy in classification), which verifies the effectiveness of PlugLM in the low-resource setting; (2) a consistent pattern across the two tasks verifies that PlugLM can absorb new knowledge simply by adding more slots to ⟨D, K, V⟩ without heavy re-training. For the second setting, Figure 3c shows that our model can absorb new cross-domain knowledge under the adaptation setting: it achieves a higher F1 score on the LINNAEUS NER dataset as increasingly more biomed-specific knowledge is injected.
## 5.3 In-Task Knowledge
Inspired by in-context learning (Brown et al., 2020)
and example-augmented generation (Cheng et al., 2022, 2023b), the training samples can also be viewed as a kind of in-task knowledge. In this section, we broaden the scope of DPM knowledge by including the training samples.
Experimental Setting The knowledge from Wikipedia is a textual description written by domain experts, while a training sample from a question-answering NLI dataset is in the form of [Q, A, Label]; this surface-form distribution shift may affect knowledge retrieval. We consider the following injection methods. (1) Concate. We directly concatenate each training sample into a long string of the form "Q [SEP] A [SEP] Label" and append it to DPM. (2) Tagged. To build a connection between model inputs and DPM, we tag each training sample by prepending a special token ([Tagged]), and use these tagged samples both in DPM and as model input. (3) Knowledge Prompting. Inspired by prompting methods (Liu et al., 2021; Schick and Schütze, 2021), we transform in-task knowledge into Wikipedia-style knowledge with a natural language prompt. For example, in the QNLI dataset, we transform [Q, A, Label] with the following prompt: "The first sentence (doesn't) entail(s) with the second. The first sentence is [Q] and the second is [A]". We choose the moderately sized QNLI and QQP tasks because in-task knowledge injection does not apply to the low-resource setting in our preliminary experiments.
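The three injection formats can be illustrated with a short sketch; the QNLI-style sample and label strings are hypothetical, and the prompting template follows the example quoted above.

```python
def concate(q, a, label):
    # (1) Concate: training sample as one long string appended to DPM
    return f"{q} [SEP] {a} [SEP] {label}"

def tagged(q, a, label):
    # (2) Tagged: prepend a special token used both in DPM and in the model input
    return f"[Tagged] {q} [SEP] {a} [SEP] {label}"

def knowledge_prompting(q, a, label):
    # (3) Knowledge Prompting: rewrite the sample as Wikipedia-style natural language
    verb = "entails" if label == "entailment" else "doesn't entail"
    return (f"The first sentence {verb} with the second. "
            f"The first sentence is {q} and the second is {a}")

sample = ("What year was the bridge built?", "The bridge was built in 1932.", "entailment")
print(knowledge_prompting(*sample))
```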
Experimental Results The results are shown in Table 5. We observe that PlugLM manages to learn from in-task knowledge and that the surface form of the knowledge affects model performance. Concatenating training samples fails to expose PlugLM to the actual in-task knowledge (zero retrievals in QNLI), and building a connection between data and knowledge through a special tagged token gives only minor improvements. In contrast, a well-designed knowledge prompting helps PlugLM learn task-specific knowledge.
| Task | Ori. | Concate. | Tagged. | Prompting. |
|--------|--------|------------|-----------|--------------|
| QNLI | 91.28 | 91.28 | 91.37 | 91.58 |
| QQP | 90.56 | 90.12 | 90.76 | 91.47 |
## 5.4 Tuning Pluglm
We investigate how each key design affects the performance of PlugLM. (1) **Number of Retrieved Knowledge.** Figure 4 shows the effect of different N on the STS-B dataset; the sparsely activated Top-5 knowledge proves to be optimal. (2) **Layers equipped with DPM.** Considering that the upper layers in PLM capture more semantic information (Geva et al., 2021), we equip the last encoder layer with DPM in PlugLM. Figure 4 shows that increasing the number of DPM-enhanced encoder layers gives minor improvements but adds much latency because of the extra MIPS search. (3) **FFN and DPM.** To further explore the relation between FFN and DPM, we propose two model variants. First, we replace the FFN in all encoder layers with a shared DPM, denoted PlugLM All. Then we fuse FFN and DPM by modifying the layer from $\mathrm{LayerNorm}(h + \mathrm{KnowAttn}(h, \mathbb{K}_{h'}, \mathbb{V}_{h'}))$ to $\mathrm{LayerNorm}(h + \mathrm{KnowAttn}(h, \mathbb{K}_{h'}, \mathbb{V}_{h'}) + \mathrm{FFN}(h))$, which we name PlugLM Fuse. The Spearman correlation on the STS-B dataset (more results are shown in Appendix E) for WikiBERT, PlugLM All, PlugLM and PlugLM Fuse is 88.64, 86.82, 89.20 and 89.10, respectively. We find that PlugLM All, which has no FFN at all, underperforms WikiBERT, while PlugLM performs comparably with PlugLM Fuse. We conjecture that FFN in different layers may play different roles, as also reported in Geva et al. (2021): for the upper layers, which capture more semantic knowledge (Jawahar et al., 2019), DPM is a flexible and extensible substitute for FFN, but for the lower layers, shallow features should be captured in the model parameters.
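For concreteness, a minimal sketch of the layer variants compared in this section is given below, assuming the retrieved keys and values are already available as tensors (e.g., from the memory fusion step in §4.2); the dimensions are illustrative rather than the paper's BERT-base configuration.

```python
import torch
import torch.nn as nn

class PlugLMLayer(nn.Module):
    def __init__(self, d=64, n_heads=4, d_ff=256, fuse=False):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.know_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))
        self.fuse = fuse

    def forward(self, h, K_z, V_z):
        # self-attention sub-layer with residual connection and LayerNorm
        h = self.norm1(h + self.self_attn(h, h, h)[0])
        # knowledge attention over the retrieved keys/values
        know = self.know_attn(h, K_z, V_z)[0]
        if self.fuse:                        # PlugLM Fuse: LayerNorm(h + KnowAttn + FFN)
            return self.norm2(h + know + self.ffn(h))
        return self.norm2(h + know)          # PlugLM: DPM replaces the FFN sub-layer

layer = PlugLMLayer(fuse=True)
h = torch.randn(1, 10, 64)                   # (batch, seq_len, d)
K_z = V_z = torch.randn(1, 5, 64)            # Top-5 retrieved keys/values
print(layer(h, K_z, V_z).shape)
```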
## 6 Conclusion
For the first time, we challenge the current implicit knowledge encoding mechanism of PLMs, identifying its two fundamental drawbacks, and propose to decouple knowledge storage from model parameters with an editable and scalable key-value memory. Inspired by the findings that FFN stores all kinds of knowledge and is essentially a key-value memory network, we transform the FFN architecture into deep retrieval with a differentiable plug-in memory (DPM), which makes the knowledge encoding of PLMs more flexible and interpretable.
Extensive experimental results in different scenarios including domain adaptation, knowledge update and in-task knowledge learning verify the design choice of PlugLM. We believe this architectural design would pave a new direction for future research on PLM, especially for super-large PLM.
## Limitations
We discuss the limitations of PlugLM as follows:
(1) Despite the strong performance achieved by our approach with DPM, it reduces inference efficiency due to the MIPS search. For example, PlugLM is about two times slower than pure transformer-based models on GLUE. This issue would become more severe as the external memory grows. Potential solutions include (a) constructing the memory at a coarser granularity (Borgeaud et al., 2022); (b) compressing DPM by semantic clustering as in Tay et al. (2022) or knowledge summarization as in Xu et al. (2022).
(2) In this paper, we choose Wikipedia for DPM
construction and PlugLM pre-training. While Wikipedia is the most commonly used data source for language model pre-training (Devlin et al.,
2019; Liu et al., 2019), there are also many other types of knowledge not covered in Wikipedia, and how to integrate different types of knowledge (e.g., factual, commonsense, syntactic and semantic knowledge) into our framework remains under-explored.
(3) Although this paper proposes a general architecture that is applicable to PLMs of all kinds and sizes including bidirectional (Devlin et al.,
2019; Liu et al., 2019; Yang et al., 2019), unidirectional (Radford et al., 2018, 2019; Brown et al.,
2020) and encoder-decoder-based PLM (Lewis et al., 2020b; Raffel et al., 2020; Song et al., 2019),
we only experiment with bidirectional models of moderate size. In particular, we believe this architectural design would be greatly beneficial for LLMs (Smith et al., 2022; Chowdhery et al., 2022; Ouyang et al., 2022) for the following reasons: (a) the parameters of an LLM cannot be easily updated once pre-training is done due to the unaffordable training cost; (b) the additional latency of MIPS retrieval is negligible compared with that of the whole LLM.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 62122089) and the National Key Research and Development Program of China (No. 2021YFC3340304).
## References
Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3554–3565. Association for Computational Linguistics.
Dogu Araci. 2019. Finbert: Financial sentiment analysis with pre-trained language models. *CoRR*,
abs/1908.10063.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3613–3618. Association for Computational Linguistics.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre.
2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240.
PMLR.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 2021. Neural machine translation with monolingual translation memory. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 7307–7318, Online.
Association for Computational Linguistics.
Kathi Canese and Sarah Weis. 2013. Pubmed: the bibliographic database. *The NCBI handbook*, 2(1).
Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, and William W. Cohen. 2022. Augmenting pretrained language models with qa-memory for opendomain question answering. *CoRR*, abs/2204.04581.
Xin Cheng, Shen Gao, Lemao Liu, Dongyan Zhao, and Rui Yan. 2022. Neural machine translation with contrastive translation memories. *CoRR*,
abs/2212.03140.
Xin Cheng, Shen Gao, Yuchi Zhang, Yongliang Wang, Xiuying Chen, Mingzhe Li, Dongyan Zhao, and Rui Yan. 2023a. Towards personalized review summarization by modeling historical reviews from customer and product separately.
Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. 2023b. Lift yourself up: Retrieval-augmented text generation with self memory.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311.
Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022a. Knowledge neurons in pretrained transformers. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL
2022, Dublin, Ireland, May 22-27, 2022, pages 8493–
8502. Association for Computational Linguistics.
Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, and Zhifang Sui. 2022b. Neural knowledge bank for pretrained transformers. *CoRR*,
abs/2208.00399.
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William W. Cohen. 2022. Mention memory: incorporating textual knowledge into transformers through entity mention attention. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Franck Dernoncourt and Ji Young Lee. 2017. Pubmed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017, Volume 2: Short Papers, pages 308–313. Asian Federation of Natural Language Processing.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Mihail Eric, Lakshmi Krishnan, François Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In *Proceedings* of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, August 15-17, 2017, pages 37–49. Association for Computational Linguistics.
Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4937–4951. Association for Computational Linguistics.
Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *CoRR*, abs/2203.14680.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 5484–5495. Association for Computational Linguistics.
Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In *Proceedings* of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5880–5894. Association for Computational Linguistics.
Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL
2020, Online, July 5-10, 2020, pages 8342–8360.
Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine Learning Research*,
pages 3929–3938. PMLR.
Ruining He and Julian J. McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11 - 15, 2016, pages 507–517. ACM.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. *CoRR*, abs/2208.03299.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Conference of* the Association for Computational Linguistics, ACL
2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3651–3657. Association for Computational Linguistics.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know. *Trans. Assoc. Comput. Linguistics*,
8:423–438.
David Jurgens, Srijan Kumar, Raine Hoover, Daniel A.
McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames.
Trans. Assoc. Comput. Linguistics, 6:391–406.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval2019 Task 4: Hyperpartisan news detection. In *SemEval*.
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A.
Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell.
2016. Overcoming catastrophic forgetting in neural networks. *CoRR*, abs/1612.00796.
Jens Kringelum, Sonny Kim Kjærulff, Søren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau.
2016. ChemProt-3.0: a global chemical biology diseases mapping. In *Database*.
Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2019. Large memory layers with product keys. In *Advances in Neural Information* Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8546–8557.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
CoRR, abs/1901.08746.
Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer.
2020a. Pre-training via paraphrasing. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020b.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020c. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Jinpeng Li, Yingce Xia, Xin Cheng, Dongyan Zhao, and Rui Yan. 2023. Learning disentangled representation via domain adaptation for dialogue summarization.
In *Proceedings of the ACM Web Conference 2023*,
WWW '23, page 1693–1702, New York, NY, USA.
Association for Computing Machinery.
Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix X. Chern, Felix X. Yu, Ruiqi Guo, and Sanjiv Kumar. 2022. Large models are parsimonious learners: Activation sparsity in trained transformers.
CoRR, abs/2210.06313.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
CoRR, abs/2107.13586.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. 2020. S2ORC: the semantic scholar open research corpus. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4969–4983. Association for Computational Linguistics.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -
November 4, 2018, pages 3219–3232. Association for Computational Linguistics.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142–150. The Association for Computer Linguistics.
Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, August 9-13, 2015, pages 43–52. ACM.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in gpt. *arXiv preprint arXiv:2202.05262*.
Alexander H. Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston.
2016. Key-value memory networks for directly reading documents. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016*, pages 1400–1409. The Association for Computational Linguistics.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with human feedback. *CoRR*, abs/2203.02155.
Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 43–54. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions
for squad. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,*
ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 784–789. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392.
The Association for Computational Linguistics.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in bertology: What we know about how BERT works. *Trans. Assoc. Comput. Linguistics*, 8:842–866.
Mohammad Rostami. 2021. Lifelong domain adaptation via consolidated internal distribution. In *Advances in Neural Information Processing Systems 34:*
Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 11172–11183.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255–269. Association for Computational Linguistics.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zheng, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro.
2022. Using deepspeed and megatron to train megatron-turing NLG 530b, A large-scale generative language model. *CoRR*, abs/2201.11990.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In *Proceedings* of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pages 5926–5936. PMLR.
Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Hervé Jégou, and Armand Joulin. 2019.
Augmenting self-attention with persistent memory.
CoRR, abs/1907.01470.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2440–2448.
Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: enhanced representation through knowledge integration. *CoRR*,
abs/1904.09223.
Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Prakash Gupta, Tal Schuster, William W.
Cohen, and Donald Metzler. 2022. Transformer memory as a differentiable search index. *CoRR*,
abs/2202.06991.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen. 2020. Facts as experts: Adaptable and interpretable neural memory over symbolic knowledge. *CoRR*, abs/2007.00849.
Jonas Wallat, Jaspreet Singh, and Avishek Anand. 2021.
Bertnesia: Investigating the capture and forgetting of knowledge in BERT. *CoRR*, abs/2106.02902.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021.
KEPLER: A unified model for knowledge embedding and pre-trained language representation. Trans.
Assoc. Comput. Linguistics, 9:176–194.
Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating bert's knowledge of language: Five analysis methods with npis. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2877–
2887. Association for Computational Linguistics.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014.
Memory networks. *arXiv preprint arXiv:1410.3916*.
Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2020. An effective domain adaptive post-training method for BERT in response selection. In *Interspeech 2020, 21st Annual*
Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 1585–1589. ISCA.
Tianyu Xu, Wen Hua, Jianfeng Qu, Zhixu Li, Jiajie Xu, An Liu, and Lei Zhao. 2022. Evidence-aware document-level relation extraction. In *Proceedings of* the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA,
October 17-21, 2022, pages 2311–2320. ACM.
Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang. 2021. Fusing context into knowledge graph for commonsense question answering. In *Findings of the Association for* Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP
2021 of *Findings of ACL*, pages 1201–1207. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019.
XLNet: Generalized autoregressive pretraining for language understanding. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019,*
NeurIPS 2019, December 8-14, 2019, Vancouver, BC,
Canada, pages 5754–5764.
Yunzhi Yao, Shaohan Huang, Li Dong, Furu Wei, Huajun Chen, and Ningyu Zhang. 2022. Kformer:
Knowledge injection in transformer feed-forward layers. In *Natural Language Processing and Chinese* Computing - 11th CCF International Conference, NLPCC 2022, Guilin, China, September 24-25, 2022, Proceedings, Part I, volume 13551 of Lecture Notes in Computer Science, pages 131–143. Springer.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 9051–9062.
Rong Zhang, Revanth Gangi Reddy, Md Arafat Sultan, Vittorio Castelli, Anthony Ferritto, Radu Florian, Efsun Sarioglu Kayi, Salim Roukos, Avi Sil, and Todd Ward. 2020. Multi-stage pre-training for lowresource domain adaptation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5461–5468, Online. Association for Computational Linguistics.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12,*
2015, Montreal, Quebec, Canada, pages 649–657.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: enhanced language representation with informative entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1441–1451. Association for Computational Linguistics.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation.
CoRR, abs/2205.12674.
## A PlugLM Pretraining Details

The details of PlugLM pre-training are shown in Table 6.
## F Details For Wikipedia And PubMed

The source and size of Wikipedia and PubMed are shown in Table 11, and the hyper-parameters for WikiBERT and PubmedBERT pre-training are shown in Table 12.
| Hyperparameter | Assignment |
|-----------------------------|---------------------|
| vocab size | 30522 |
| num layers with DPM | top-1 |
| top-N | 5 |
| number of layers | 12 |
| attention head | 12 |
| mlm masking | static |
| mlm masking rate | 0.15 |
| ffn size | 3072 |
| max knowledge length | 288 |
| Uncased | True |
| memory size | 14802866 |
| batch size | 64 |
| gradient accumulation steps | 128 |
| max train steps | 8000 |
| optimizer | FusedLAMBAMP |
| learning rate | 1e-4 |
| index refreshing step | 200 |
| learning rate scheduler | PolyWarmUpScheduler |
| Warmup proportion | 0.2843 |
| weight decay | 0.01 |

Table 6: Hyperparameters for PlugLM pretraining.

| Hyperparameter | Assignment |
|-----------------------------|---------------------|
| vocab size | 30522 |
| Uncased | True |
| number of layers | 12 |
| attention head | 12 |
| ffn size | 3072 |
| mlm masking | static |
| batch size | 64 |
| gradient accumulation steps | 128 |
| max train steps | 8000 |
| optimizer | FusedLAMBAMP |
| learning rate | 6e-3 |
| index refreshing step | 200 |
| learning rate scheduler | PolyWarmUpScheduler |
| Warmup proportion | 0.2843 |
| weight decay | 0.01 |

Table 12: Hyperparameters for WikiBERT and PubmedBERT pretraining.
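For readers who want to reproduce the setup, the Table 6 hyperparameters can be collected into a single configuration object. The sketch below is purely illustrative: the `PlugLMConfig` class and its field names are our own naming, not the authors' released code.

```python
from dataclasses import dataclass

@dataclass
class PlugLMConfig:
    """Illustrative container for the Table 6 pre-training hyperparameters (assumed names)."""
    vocab_size: int = 30522
    num_layers: int = 12
    num_attention_heads: int = 12
    ffn_size: int = 3072
    num_layers_with_dpm: int = 1           # DPM is plugged into the top-1 layer
    top_n_knowledge: int = 5               # top-N entries retrieved per query
    max_knowledge_length: int = 288
    memory_size: int = 14_802_866
    mlm_masking: str = "static"
    mlm_masking_rate: float = 0.15
    uncased: bool = True
    batch_size: int = 64
    gradient_accumulation_steps: int = 128
    max_train_steps: int = 8000
    optimizer: str = "FusedLAMBAMP"
    learning_rate: float = 1e-4
    index_refreshing_step: int = 200       # retrieval index rebuilt every 200 steps
    lr_scheduler: str = "PolyWarmUpScheduler"
    warmup_proportion: float = 0.2843
    weight_decay: float = 0.01

config = PlugLMConfig()
print(config.top_n_knowledge, config.memory_size)
```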
## B Data For Domain Adaptive Post-Training
The detailed statistics of the domain corpora for post-training are listed in Table 7 and those of the downstream tasks in Table 8.
## C Latency
In Table 9, we show the detailed latency of WikiBERT and PlugLM.
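As a reference for how such wall-clock numbers can be obtained, the sketch below shows one common way to time forward passes on GPU with PyTorch. It is a generic measurement loop written under our own assumptions (a CUDA device, batches given as dicts of tensors, synchronization before reading the clock), not the exact script used to produce Table 9.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, batches, device="cuda"):
    """Return total wall-clock seconds spent on forward passes over `batches`."""
    model.eval().to(device)
    # Warm-up pass so one-time CUDA kernel setup is not counted.
    model(**{k: v.to(device) for k, v in batches[0].items()})
    torch.cuda.synchronize()

    start = time.perf_counter()
    for batch in batches:
        model(**{k: v.to(device) for k, v in batch.items()})
    torch.cuda.synchronize()  # wait for all queued kernels before stopping the clock
    return time.perf_counter() - start
```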
## D Case Study
We show three concrete examples from QNLI and ACL-ARC in Tables 13, 14, and 15.
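The five passages shown with each example correspond to the top-N = 5 setting in Table 6. As an illustration of what such top-N retrieval over a key-value memory can look like, the sketch below scores a query vector against pre-computed key vectors with an inner product and returns the best-scoring entries; the function and variable names are ours, and this is not the authors' implementation.

```python
import numpy as np

def retrieve_top_n(query_vec, memory_keys, memory_values, n=5):
    """Return the n memory values whose keys score highest under inner product.

    query_vec:     (d,) query representation
    memory_keys:   (M, d) key vectors, one per stored knowledge entry
    memory_values: list of M knowledge strings (or value vectors)
    """
    scores = memory_keys @ query_vec                  # (M,) inner-product scores
    top_idx = np.argsort(-scores)[:n]                 # indices of the n largest scores
    return [(int(i), float(scores[i]), memory_values[i]) for i in top_idx]

# Toy usage with random vectors standing in for encoded text.
rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 768)).astype(np.float32)
values = [f"knowledge entry {i}" for i in range(1000)]
query = rng.normal(size=768).astype(np.float32)
print(retrieve_top_n(query, keys, values, n=5))
```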
## E More Experiments For Tuning Pluglm
In Table 10, we show more results for the experiments in Section 5.4 on STS-B, MRPC and QNLI.
| | WikiBERT | PlugLM All | PlugLM Fuse | PlugLM |
|------------|--------------|---------------|----------|-------|
| STS-B | 88.64 | 86.82 | 89.20 | 89.10 |
| MRPC | 88.85 | 87.42 | 91.27 | 91.54 |
| QNLI | 90.66 | 88.19 | 91.36 | 91.28 |

Table 10: Experimental results as in Section 5.4 on STS-B, MRPC and QNLI. The evaluation metrics are Spearman correlation, F1 score and accuracy, respectively.
Table 7: List of the domain-specific unlabeled datasets.
| Domain | Pretraining Corpus | # Tokens | Size |
|----------|------------------------------------------------------|------------|--------|
| BIOMED | 1.24M papers from S2ORC (Lo et al., 2020) | 2.67B | 12GB |
| CS | 5.07M papers from S2ORC (Lo et al., 2020) | 4.3B | 18GB |
| NEWS | 11.90M articles from REALNEWS (Zellers et al., 2019) | 6.66B | 39GB |
| REVIEWS | 24.75M AMAZON reviews (He and McAuley, 2016) | 2.11B | 11GB |
Table 8: Specifications of the various target task datasets. † indicates high-resource settings. Sources: CHEMPROT
(Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), IMDB (Maas et al., 2011).
| Domain | Task | Label Type | Train (Lab.) | Dev. | Test | Classes |
|----------|-------------------------|-------------------------|----------------|--------|--------|-----------|
| BIOMED | CHEMPROT | relation classification | 4169 | 2427 | 3469 | 13 |
| BIOMED | †RCT | abstract sent. roles | 18040 | 30212 | 30135 | 5 |
| CS | ACL-ARC | citation intent | 1688 | 114 | 139 | 6 |
| CS | SCIERC | relation classification | 3219 | 455 | 974 | 7 |
| NEWS | HYPERPARTISAN | partisanship | 515 | 65 | 65 | 2 |
| NEWS | †AGNEWS | topic | 115000 | 5000 | 7600 | 4 |
| REVIEWS | †HELPFULNESS | review helpfulness | 115251 | 5000 | 25000 | 2 |
| REVIEWS | †IMDB | review sentiment | 20000 | 5000 | 25000 | 2 |
Table 9: Testing latency of WikiBERT and PlugLM measured in seconds. All experiments are computed on the same computational device with the same batch size. The CPU is AMD EPYC 7K62 48-Core Processor. GPU is A100-SXM4. Driver Version is 450.156.00. CUDA Version is 11.1.

| | RTE | COLA | MRPC | STS-B | SST-2 | QNLI | QQP | MNLI-(m/mm) |
|----------|----------|----------|---------|----------|----------|----------|---------------|-------------|
| Size | 0.27K | 1.04K | 0.41K | 1.5K | 0.87K | 5.47K | 40.43K | 9.82K/9.83K |
| Metrics | Accuracy | Matthews | F1 | Spearman | Accuracy | Accuracy | Accuracy | Accuracy |
| WikiBERT | 1.01 | 1.98 | 1.33 | 2.43 | 1.75 | 7.01 | 52.32 | 15.03/15.02 |
| PlugLM | 1.73 | 4.41 | 2.22 | 5.94 | 3.86 | 20.01 | 141.15 | 34.60/34.58 |

Table 11: List of the PubMed and Wikipedia corpora.
| Dataset | Domain | Source | Size |
|-----------|------------|---------------------------------------------|---------|
| Wikipedia | General | https://dumps.wikimedia.org | 14.35GB |
| PubMed | Biomedical | https://github.com/naver/biobert-pretrained | 28.12GB |
| Question | Answer | Prediction | Label |
|----------|--------|------------|-------|
| How much of Jacksonville is made up of water? | According to the United States Census Bureau, the city has a total area of 874.3 square miles (2,264 km2), making Jacksonville the largest city in land area in the contiguous United States; of this, 86.66% (757.7 sq mi or 1,962 km2) is land and 13.34% (116.7 sq mi or 302 km2) is water. | Entailment | Entailment |
| (1) this article lists the 3, 143 states of america. the 50 states of the united states are divided into 3, 007 " counties ", political and geographic subdivisions of a state ; 236 other local governments and geographic places are also first - order administrative divisions of their respective state / district / territory, but are called by different names. the latter are referred to collectively as " county equivalents " by the united states census bureau. the 236 county equivalents include 100 equivalents in the territories ( such as those in puerto rico ) outside the 50 states and the district of columbia. the large majority of counties and equivalents were organized by 1970. since that time, most creations, boundary changes and dissolutions have occurred in alaska and virginia. among the 50 states, 44 are partitioned entirely into counties, with no county equivalents. louisiana is instead divided into 64 equivalent parishes. (2) the united states census bureau ( usc ##b ) , officially the bureau of the census , is a principal agency of the u . s . federal statistical system , responsible for producing data about the american people and economy . the census bureau is part of the u . s . department of commerce and its director is appointed by the president of the united states . the census bureau ' s primary mission is conducting the u . s . census every ten years , which all ##oca ##tes the seats of the u . s . house of representatives to the states based on their population . [ 1 ] the bureau ' s various census ##es and surveys help all ##oca ##te over $ 67 ##5 billion in federal funds every year and it assists states , local communities , and businesses make informed decisions . [ 2 ] [ 3 ] [ 4 ] the information provided by the census informs decisions on where to build and maintain schools , hospitals , transportation infrastructure , and police and fire departments (3) the crestview - fort walton beach - destin, florida, metropolitan statistical area, as defined by the united states census bureau, is a metropolitan area consisting of two counties in northwest florida, anchored by the cities of crestview, florida, and fort walton beach, florida. as of the 2010 census, the msa had a population of 235, 865, and a 2012 population estimate of 247, 665. the metropolitan area is a part of the " northwest corridor " which includes the pensacola metropolitan area and the panama city metropolitan area. demographics. as of the census of 2010, there were 235, 865 people, 95, 892 households, and 63, 964 families residing within the msa. the racial makeup of the msa was 81. 1 % white, 9. 3 % african american, 0. 3 % native american, 2. 9 % asian, 0. 1 % pacific islander, 0. 2 % from other races, and 3. 9 % from two or more races. hispanic or latino of any race were 6. 8 % of the population. according to the 2010 american community survey 1 - year (4) analog to digital conversions were achieved through steinberg, and in some cases mytek, converters. the album was recorded and mixed exclusively with steinberg cubase digital audio workstations on microsoft windows operating systems with waves ssl and abbey road tg12413 plugins. it was revealed that neither brahm nor marc know how to operate autotune, so it was not used. the songs were often performed to a click track, but there was no " snapping the drums to a grid ", which is a popular computerized technique to ensure that drums are in perfect time while simultaneously sucking the life out of an otherwise real performance. production. 
" tears of the enchanted mainframe " was produced and engineered by taylor and kaducak. backmasking is used on the track " superusurper " during an interlude that features a reversed reading of a passage from the george orwell novel nineteen eighty four. the album was mastered by geoff pesche and alex wharton at abbey road studios in london. title and artwork. " tears of the enchanted mainframe " (5) the zafarnama (, lit. " book of victory " ) is a biography of timur written by the historian nizam ad - din shami. it served as the basis for a later and better - known " zafarnama " by sharaf ad - din ali yazdi. one translation by felix tauer was published in prague in 1937. | | | |
Table 13: Example from QNLI dataset. The numbered passages above are the knowledge retrieved for this example.
| Input | Prediction | Label |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|------------|
| Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ). | Background | Background |
| the engineering discipline that applies control theory to design systems with desired behaviors. control engineers are responsible for the research, design, and development of control devices and systems, typically in manufacturing facilities and process plants. control methods employ sensors to measure the output variable of the device and provide feedback to the controller so that it can make corrections toward desired performance. automatic control manages a device without the need of human inputs for correction, such as cruise control for regulating a car's speed. control systems engineering activities are multi - disciplinary in nature. they focus on the implementation of control systems, mainly derived by mathematical modeling. because instrumentation and control play a significant role in gathering information from a system and changing its parameters, they are a key part of control loops. as profession. high demand for engineering professionals is found in fields associated with process automation. specializations include industrial instrumentation, system dynamics, process control, and control systems. additionally, technological knowledge, particularly in computer systems, is essential to the job of (2) instrumentation is the art and science of measurement and control. instrumentation may also refer to: (3) the scientific and technological innovation ability of colleges and universities, and strengthening the evaluation research of the scientific and technological innovation ability and efficiency of colleges and universities, can we better promote the scientific and technological innovation ability of colleges and universities. universities the evaluation of scientific and technological innovation ability in colleges and universities is a complex system engineering, and the understanding of its connotation is the most important problem to be considered in the comprehensive evaluation. by consulting the data, it is found that the previous researches are mainly focused on the following three aspects : 1. from the perspective of innovative resource demand and innovative achievements, the scientific and technological innovation in colleges and universities is regarded as an organic whole composed of various elements. in the whole innovation system, colleges and universities undertake the functions and tasks of knowledge production and dissemination, technological innovation and transformation as well as personnel training. according to the relationship between innovation elements, the scientific and technological innovation ability of colleges and universities is divided into basic strength of scientific and technological innovation, scientific and technological innovation input ability, knowledge innovation ability, technological innovation ability, scientific and technological innovation output ability. science and technology innovation achievement transformation ability, talent innovation ability. 2. from the perspective of innovation process, the ability of scientific and technological innovation in colleges and universities is embodied in the process of knowledge creation, knowledge dissemination, transformation and diffusion of technological inventions. it also includes the technological, economic and managerial abilities that the university relies on (4) automation engineering has two different meanings : automation engineer. 
automation engineers are experts who have the knowledge and ability to design, create, develop and manage machines and systems, for example, factory automation, process automation and (5) this learning methodology is called blended learning. blended learning can also incorporate machine learning and other such technologies to implement adaptive learning. |
Table 14: Example from ACL-ARC dataset. The numbered passages above are the knowledge retrieved for this example.
| Input | Prediction | Label |
|-------|------------|-------|
| Although there are other discussions of the paragraph as a central element of discourse ( e.g. Chafe 1979 , Halliday and Hasan 1976 , Longacre 1979 , Haberlandt et al. 1980 ) , all of them share a certain limitation in their formal techniques for analyzing paragraph structure . | CompareOrContrast | CompareOrContrast |
(1) automation engineering has two different meanings : automation engineer. automation engineers are experts who have the knowledge and ability to design, create, develop and manage machines and systems, for example, factory automation, process automation and warehouse automation. scope. automation engineering is the integration of standard engineering fields. automatic control of various control system for operating various systems or machines to reduce human efforts & amp ; time to increase accuracy. automation engineers design and service electromechanical devices and systems to high - speed robotics and programmable logic controllers ( plcs ). work and career after graduation. graduates can work for both government and private sector entities such as industrial production, companies that create and use automation systems, for example paper industry, automotive industry, food and agricultural industry, water treatment, and oil & amp ; gas sector such as refineries, power plants. job description. automation engineers can design, program, simulate and test automated machinery and processes, and usually are employed in industries such as the energy sector in plants, car manufacturing facilities or food processing plants and robots. automation engineers are responsible for creating detailed design specifications and other documents, developing automation based on specific requirements for the process involved, and conforming to international standards like iec - 61508, local standards, and other process specific guidelines and specifications, simulate, test and commission electronic equipment for automation.
(2) abstract. manipulator is a powerful tool which can help people to carry out the safe operation, production automation and improve the productivity of labor. based on the summary of the situation of research and development of manipulator, this article analyzes the functions of parts moving manipulator and carries out mechatronic design of parts moving manipulator according to the practical project items of parts moving manipulator of enterprises. on the basis of the analysis of the performance requirement and the operating characteristics of parts moving manipulator, this article analyses and designs the whole schemes for the mechanical structure, driving system, driving mode and the software and hardware control system of manipulator, and in which, the form of mechanical structure of cylindrical coordinate system is determined to be adopted in the design of manipulator, the driving scheme of pneumatic transmission is adopted, and the system control is carried out by plc. on this basis, this article analyses the kinematics and dynamics of parts moving manipulator and summarizes the relationship between displacement, speed, acceleration and joint angle. with the progress of science and technology and the development of social economy, the application area of manipulator has been becoming wider and wide. the manipulator can be found everywhere in human society. the application of manipulator has been extended to the civilian application fields such
(3) in working environments with large manipulators, accidental collisions can cause severe personal injuries and can seriously damage manipulators, necessitating the development of an emergency stop algorithm to prevent such occurrences. in this paper, we propose an emergency stop system for the efficient and safe operation of a manipulator by applying an intelligent emergency stop algorithm. our proposed intelligent algorithm considers the direction of motion of the manipulator. in addition, using a new regression method, the algorithm includes a decision step that determines whether a detected object is a collision - causing obstacle or a part of the manipulator. we apply our emergency stop system to a two - link manipulator and assess the performance of our intelligent emergency stop algorithm as compared with other models. increasing the safety of robots, especially industrial manipulators, is just as important as improving their performance. a collision between a manipulator and a person, for example, may cause severe personal injury as well as damage to the machinery. thus, it is necessary to develop an algorithm that can detect collisions before they occur and make the manipulator stop before damage is done. various emergency stop or obstacle avoidance algorithms for robots, particularly those utilizing distance - measuring sensors [ 1 ] [ 2 ] [
3 ] [ 4 ] or vision sensors have been reported [ 5 ] [ 6 ] [ 7 ] [ 8 ] and those algorithms using each
(4) the reliability of kinematic trajectory of manipulators describes the ability that manipulators keep kinematic accurate. it is an important parameter to evaluate the performance of manipulators. the kinematic accuracy of manipulators can be improved when piezoelectricity material are used as a transducer to suppress the vibration of flexible manipulators. first, a 3 degree - of - freedom parallel manipulator system and its dynamic equations are introduced. the theory and experiment of a vibration suppression system are then presented. the calculation method of both error and reliability of kinematic trajectory of manipulator is further implemented. finally, the reliability of kinematic accuracy are calculated and analyzed for the 3 degree - of - freedom parallel manipulator with or without vibration suppressing control. the results show that the reliability of kinematic accuracy is improved using vibration suppressing control. the reliability of kinematic accuracy of manipulators is an important indicator to evaluate the accuracy of manipulator motion [ 1 ]. in manipulators, light weight linkages are employed to achieve high speed and acceleration motions for better performance. however, the light weight linkage will result in inherent structural vibration, and the structural vibration leads to inaccurate kinematic trajectory of manipulators. different methods have been proposed to reduce the vibration of the flexible link
(5) abstract - economic dispatch and frequency regulation are typically viewed as fundamentally different problems in power systems and, hence, are typically studied separately. in this paper, we frame and study a joint problem that co -
optimizes both slow timescale economic dispatch resources and fast timescale frequency regulation resources. we show how the joint problem can be decomposed without loss of optimality into slow and fast timescale subproblems that have appealing interpretations as the economic dispatch and frequency regulation problems, respectively. we solve the fast timescale subproblem using a distributed frequency control algorithm that preserves network stability during transients. we solve the slow timescale subproblem using an efficient market mechanism that coordinates with the fast timescale subproblem. we investigate the performance of our approach on the ieee 24 - bus reliability test system. abstract - economic dispatch and frequency regulation are typically viewed as fundamentally different problems in power systems and, hence, are typically studied separately. in this paper, we frame and study a joint problem that co - optimizes both slow timescale economic dispatch resources and fast timescale frequency regulation resources. we show how the joint problem can be decomposed without loss of optimality into slow and fast timescale subproblems that have appealing interpretations as the economic dispatch and frequency regulation problems, respectively. we solve the fast timescale subproblem

Table 15: Example from ACL-ARC dataset. The numbered passages above are the knowledge retrieved for this example.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission
✓ A1. Did you describe the limitations of your work?
the last section

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did you use or create scientific artifacts?**
Code will be released when published.
✓ B1. Did you cite the creators of artifacts you used?
section 5
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
section 5

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
appendix B
## C ✓ **Did you run computational experiments?**
section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
section 5

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
deng-etal-2023-goal | Goal Awareness for Conversational {AI}: Proactivity, Non-collaborativity, and Beyond | https://aclanthology.org/2023.acl-tutorials.1 | Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conventional conversation researches mainly focus on the responseability of the system, such as dialogue context understanding and response generation, but overlooks the design of an essential property in intelligent conversations, i.e., goal awareness. The awareness of goals means the state of not only being responsive to the users but also aware of the target conversational goal and capable of leading the conversation towards the goal, which is a significant step towards higher-level intelligence and artificial consciousness. It can not only largely improve user engagement and service efficiency in the conversation, but also empower the system to handle more complicated conversation tasks that involve strategical and motivational interactions. In this tutorial, we will introduce the recent advances on the design of agent{'}s awareness of goals in a wide range of conversational systems. |
## Goal Awareness for Conversational AI: Proactivity, Non-Collaborativity, and Beyond
Yang Deng1, Wenqiang Lei2, Minlie Huang3, Tat-Seng Chua4 1 The Chinese University of Hong Kong 2 Sichuan University 3 Tsinghua University 4 National University of Singapore [email protected] [email protected] [email protected] [email protected]
## 1 Introduction
Tutorial Description Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conventional conversation research mainly focuses on the response-ability of the system, such as dialogue context understanding and response generation, but overlooks the design of an essential property in intelligent conversations, *i.e.*, goal awareness. The awareness of goals means the state of not only being responsive to the users but also being aware of the target conversational goal and capable of leading the conversation towards the goal, which is a significant step towards higher-level intelligence and artificial consciousness. It can not only largely improve user engagement and service efficiency in the conversation, but also empower the system to handle more complicated conversation tasks that involve strategical and motivational interactions. In this tutorial, we will introduce the recent advances on the design of agent's awareness of goals in a wide range of conversational systems.
Type of Tutorial Cutting-edge
Targeted Audience Target audiences are researchers and practitioners who are interested in natural language processing and human-computer interaction. The audience will learn about the state-of-the-art research in conversational AI and the cutting-edge designs of agent's awareness in various conversational systems.
Suggested Duration Half day (3 hours)
## 2 Tutorial Outline

## Part I: Preliminary (20 Minutes)
Conversational agents are generally envisioned to achieve the conversational goal by providing social support or functional service to human users via natural language interactions. In terms of the goal, Part I will present a brief overview of the widely-studied problems and corresponding mainstream approaches in several typical conversational systems, including open-domain dialogue (ODD)
systems (Zhang et al., 2018a; Li et al., 2017; Roller et al., 2021), task-oriented dialogue (TOD) systems (Budzianowski et al., 2018; Lei et al., 2018; Su et al., 2022), conversational question answering
(CQA) systems (Choi et al., 2018; Reddy et al.,
2019; Anantha et al., 2021; Qiu et al., 2021), and conversational recommender systems (CRS) (Li et al., 2018; Deng et al., 2021; Wang et al., 2022).
## Part II: Proactive Conversational Systems (50 Minutes)
As opposed to responding to users, proactivity is the most prominent feature of goal awareness in conversational systems, which can improve the collaboration between the users and system towards the ultimate conversation goal. Derived from the definition of proactivity in organizational behaviors (Grant and Ashford, 2008) and its dictionary definitions (Dictionary, 1989), conversational agents' proactivity can be defined as the capability to create or control the conversation by taking the initiative and anticipating impacts on themselves or human users. In this part, we will provide a comprehensive introduction about such efforts on the design of agent's proactivity that span various task formulations and application scenarios. In specific, we categorize them in three directions according to the application scenario, and plan to discuss their research problems and methods as follows:
- **Topic Shifting and Planning in Open-Domain Dialogues** The goal of ODD systems is to maintain engaging social conversations with users.
Proactive ODD systems can consciously change topics (Rachna et al., 2021; Xie et al., 2021) and lead directions (Tang et al., 2019; Wu et al., 2019; Yang et al., 2022) for improving user engagement in the conversation. We will present the existing methods for topic shifting and planning in open-domain dialogues, including graph-based topic planning (Qin et al., 2020; Zhong et al., 2021; Xu et al., 2020; Ni et al., 2022), responding plan generation (Kishinami et al., 2022), and learning from interactions with users (Lei et al., 2022).
- **Additional Information Delivery in Task-oriented Dialogues** The goal of TOD systems is to provide functional service for users, such as making reservations or managing schedules. The proactivity in TOD systems is firstly defined as the capability of consciously providing additional information that is not requested by but useful to the users (Balaraman and Magnini, 2020a,b),
which can improve the quality and effectiveness of conveying functional service in the conversation. We will introduce the recent studies of proactive TOD systems with various designs. For instance, Sun et al. (2021) add topical chit-chats into the responses for TODs. Chen et al. (2022c)
enrich task-oriented dialogues with relevant entity knowledge.
- **Uncertainty Elimination in Information-seeking Dialogues** The goal of CIS systems (Zamani et al., 2022) is to fulfill the user's information needs and its typical applications include conversational search, conversational recommendation, and conversational question answering.
Conventional CIS systems assume that users always convey clear information requests, while the user queries, in reality, are often brief and succinct. Recent years have witnessed several advances in developing proactive CIS systems that can consciously eliminate the uncertainty for more efficient and precise information seeking by initiating a subdialogue. Such a subdialogue can either clarify the ambiguity of the query or question in conversational search (Aliannejadi et al., 2019, 2021; Zamani et al., 2020) and conversational question answering (Guo et al., 2021; Deng et al., 2022a), or elicit the user preference in conversational recommendation (Zhang et al.,
2018b; Lei et al., 2020a,b).
## Part III: Non-Collaborative Conversational Systems (40 Minutes)
Most existing conversational systems are built upon the assumption that the users willingly collaborate with the conversational agent to reach the mutual goal. However, this assumption may not always hold in some real-world scenarios, where the users and the system do not share the same goal (He et al., 2018; Wang et al., 2019) or the users are not willing to coordinate with the agent (Yang et al., 2019; Kim et al., 2022). In these cases, the conversational agent requires another feature of goal awareness, *i.e.*, non-collaborativity (Li et al.,
2020; Zhou et al., 2020), which means the capability of handling both in-goal and off-goal dialogues appropriately for ultimately leading back to the system's goal. In this part, we will categorize the non-collaborative settings into two groups as follows and cover their to-date work respectively.
- **The users and the system do not share the same goal.** Typical applications include persuasion dialogues (Wang et al., 2019), negotiation dialogues (He et al., 2018; Chawla et al., 2021),
and anti-scam dialogues (Li et al., 2020). We will present the approaches for the system to consciously mitigate and resolve the conflict goals with users, including dialogue strategy learning (Dutt et al., 2021; Yamaguchi et al., 2021; Joshi et al., 2021), user personality modeling (Shi et al., 2021; Yang et al., 2021), and response style transfer (Mishra et al., 2022; Wu et al., 2021).
- **The users are not willing to coordinate with the agent.** Example scenarios include calming down the emotional users before solving their problems (Liu et al., 2021b), managing the users' complaints before providing service (Yang et al.,
2019), and handling problematic content during the conversations (Kim et al., 2022). We will introduce the pioneering studies for the system to consciously deal with non-collaborative users during the conversation, including emotion cause analysis (Tu et al., 2022; Cheng et al., 2022), user satisfaction estimation (Liu et al., 2021a; Deng et al., 2022b), and safe response generation (Baheti et al., 2021; Ung et al., 2022).
## Part IV: Multi-Goal Conversational Systems (30 Minutes)
All the aforementioned conversational systems assume that users always know what they want and the system solely targets at reaching a certain goal, such as chit-chat, question answering, recommendation, etc. The system with a higher level of agent's awareness of goals should also be capable of handling conversations with multiple and various goals. As for multi-goal conversational systems (Liu et al., 2022; Deng et al., 2022c), the agent is expected to consciously discover users' intentions and naturally lead user-engaged dialogues with multiple conversation goals. We will cover the newly proposed problems in multi-goal conversational systems with their corresponding data resources (Sun et al., 2021; Zhao et al., 2022; Young et al., 2022; Chiu et al., 2022). Then we will discuss two problem settings of multi-goal conversational systems with corresponding state-of-the-art approaches: (i) The goal sequence is predefined (Bai et al., 2021; Zhang et al., 2021b), and
(ii) The next goal needs to be predicted (Liu et al.,
2020; Chen et al., 2022b; Deng et al., 2022c).
## Part V: Open Challenges For Conversational Agents' Awareness And Beyond (40 Minutes)
In the last part, we will discuss the main open challenges in developing agent's awareness in conversational systems and several potential research directions for future studies.
- **Evaluation for Conversational Agent's Awareness** The development of robust evaluation protocols has already been a long-standing problem for different kinds of conversational systems (Zhang et al., 2021a; Peng et al., 2021; Li et al., 2022b). The evaluation for conversational agent's awareness is a more challenging problem, since it involves evaluation not only from the perspective of natural language, but also from the perspectives of human-computer interaction, sociology, psychology, etc. We will cover the latest studies for shedding some light on this topic, including popular metrics such as goal completion and user satisfaction (Liu et al., 2020; Lei et al., 2022; Gupta et al., 2022), and model-based methods such as user simulators (Zhang and Balog, 2020; Sekulic et al., 2022).
- **Ethics for Conversational Agent's Awareness**
Although existing designs of agent's awareness of goals in conversational systems generally aim at social goodness (Wang et al., 2019; Liu et al.,
2021b; Kim et al., 2022), it is inevitably a double-edged sword that can be used for good or evil.
For responsible NLP research, we will discuss several important aspects of ethical issues in conscious conversational systems: (i) Factuality: Factual incorrectness and hallucination of knowledge are common in conversational systems (Dziri et al., 2022; Honovich et al., 2021).
When enabling the conversational agent with awareness, it becomes more crucial to guarantee the factuality of the system-provided information (Chen et al., 2022a). (ii) Safety: Besides general dialogue safety problems, such as toxic language and social bias (Saveski et al., 2021; Barikeri et al., 2021), conscious conversational systems need to pay more attentions to the aggressiveness issue during the non-collaborative conversations (Kim et al., 2022; Hu et al., 2022).
(iii) Privacy: The privacy issue is overlooked in current studies on conversational systems (Li et al., 2022a; Shi et al., 2022), but the agent's awareness raises concerns about how these conversational systems handle personal information obtained from the users. Furthermore, we will introduce some recent released resources that can be adopted for studying this topic (Ziems et al.,
2022; Sun et al., 2022; Kim et al., 2022).
- **Agent's Awareness in LLM-based Conversational AI** Large Language Models (LLMs)
have been demonstrated to be powerful in handling various NLP tasks in the form of conversations, such as ChatGPT (Schulman et al.,
2022), LaMDA (Thoppilan et al., 2022), BlenderBot (Shuster et al., 2022), etc. However, these applications are typically designed to follow the user's instructions and intents. There are still several limitations that can be attributed to the lack of agent's awareness, such as passively providing randomly-guessed answers to ambiguous user queries, failing to refuse or handle problematic user requests that may exhibit harmful or biased conversations, etc. In addition, they also fall short of interacting under non-collaborative or system-oriented settings. Therefore, we will discuss the role of LLMs in goal awareness for conversational AI
with some latest studies (Huang et al., 2022; Ahn et al., 2022; Yao et al., 2022).
## 3 Presenters
Yang Deng is a final-year Ph.D. candidate in The Chinese University of Hong Kong. His research lies in natural language processing and information retrieval, especially for dialogue and QA systems. He has published over 20 papers at top venues such as ACL, EMNLP, SIGIR, WWW, TKDE, and TOIS.
Additional information is available at https://
dengyang17.github.io.
Wenqiang Lei is a Professor in Sichuan University. His research interests focus on conversational AI, including conversational recommendation, dialogue and QA systems. He has published relevant papers at top venues such as ACL, EMNLP,
KDD, SIGIR, TOIS, and received the ACM MM
2020 best paper award. He has given tutorials on the topic of conversational recommendation at RecSys 2021 and SIGIR 2020, and co-organized special issues about conversational information seeking on ACM Trans. on the Web. Specifically, his tutorial at SIGIR 2020 attracted over 1,600 attendees, making it one of the most popular tutorials at SIGIR 2020. Additional information is available at https://sites.google.com/
view/wenqianghome/home.
Minlie Huang is an Associate Professor with the Department of Computer Science and Technology, Tsinghua University. He has authored or coauthored more than 100 papers in premier conferences and journals (ACL, EMNLP, TACL, etc). His research interests include natural language processing, particularly in dialog systems, reading comprehension, and sentiment analysis. He is an editor of TACL, CL, TNNLS, the Area Chair or SAC of ACL/EMNLP for more than 10 times. He is the recipient of IJCAI 2018 distinguished paper award, a nominee of ACL 2019 best demo papers, and SIGDIAL 2020 best paper award. Additional information is available at http://coai.cs.
tsinghua.edu.cn/hml.
Tat-Seng Chua is the KITHCT Chair Professor with the School of Computing, National University of Singapore. His main research interest include multimedia information retrieval and social media analytics. He is the 2015 winner of the prestigious ACM SIGMM Technical Achievement Award and receives the best papers (or candidates)
over 10 times in top conferences (SIGIR, WWW, MM, etc). He serves as the general co-chair of top conferences multiple times (MM 2005, SIGIR 2008, WSDM 2023, etc), and the editors of multiple journals (TOIS, TMM, etc). He has given invited keynote talks at multiple top conferences, including the recent one on the topic of multimodal conversational search and recommendation. Additional information is available at https://www.chuatatseng.com/.
## 4 Reading Lists Previous Tutorials:
(Chen et al., 2017b) ACL 2017 - Deep Learning for Dialogue Systems;
(Su et al., 2018) NAACL 2018 - Deep Learning for Conversational AI;
(Gao et al., 2018) ACL 2018/SIGIR 2018 - Neural Approaches to Conversational AI;
(Gao et al., 2020) SIGIR 2020 - Recent Advances in Conversational Information Retrieval;
(Dalton et al., 2022) SIGIR 2022 - Conversational Information Seeking: Theory and Application.
Related Surveys or Book Chapters:
(Chen et al., 2017a) A Survey on Dialogue Systems:
Recent Advances and New Frontiers;
(Gao et al., 2019) Neural Approaches to Conversational AI; (Huang et al., 2020) Challenges in Building Intelligent Open-domain Dialog Systems;
(Zamani et al., 2022) Conversational Information Seeking;
(Gao et al., 2022) Neural Approaches to Conversational Information Retrieval;
(Yan et al., 2022) Deep Learning for Dialogue Systems: Chit-Chat and Beyond.
## 5 Other Tutorial Information
Breadth and Diversity Considerations According to the representative set of papers listed in the selected bibliography, the concerned work in this tutorial will contain only 10%-15% of work that involves at least one of the four presenters. The rest of the tutorial will present a comprehensive overview of the tutorial topic by discussing the related work as much as possible from other researchers. The discussed approaches are problem-driven and language-agnostic, which means that the introduced content is generally applicable to all languages. The techniques are also not limited to a certain type of dialogues and can be generalized to diverse conversational systems. The presenters have diverse backgrounds across multiple institutions in different regions.
Ethical Considerations Artificial consciousness is a broad and essential topic towards "Strong AI"
in the whole AI community (Searle, 1992), which can and should be used for social goodness, but inevitably comes with potential risks. In fact, the awareness of goals is just one of the cognitive aspects of consciousness (Baars, 1993). As part of this tutorial, we will provide a specific section for discussing the ethical considerations and designs for agent's awareness in conversational systems.
This tutorial also provides the opportunity to arouse discussions on how far we can and should go for agent's consciousness in conversational AI from the view of ethical and responsible NLP researches.
Open Access of Materials All tutorial materials will be made publicly available.
## References
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J.
Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan.
2022. Do as I can, not as I say: Grounding language in robotic affordances. *CoRR*, abs/2204.01691.
Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail S. Burtsev. 2021.
Building and evaluating open-domain dialogue corpora with clarifying questions. In *EMNLP 2021*,
pages 4473–4484.
Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarifying questions in open-domain information-seeking conversations. In *SIGIR 2019*, pages 475–484.
Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2021, pages 520–534.
Bernard J Baars. 1993. *A cognitive theory of consciousness*. Cambridge University Press.
Ashutosh Baheti, Maarten Sap, Alan Ritter, and Mark O.
Riedl. 2021. Just say no: Analyzing the stance of neural dialogue generation in offensive contexts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP
2021, pages 4846–4862.
Jiaqi Bai, Ze Yang, Xinnian Liang, Wei Wang, and Zhoujun Li. 2021. Learning to copy coherent knowledge for response generation. In Thirty-Fifth AAAI
Conference on Artificial Intelligence, AAAI 2021, pages 12535–12543.
Vevake Balaraman and Bernardo Magnini. 2020a. Investigating proactivity in task-oriented dialogues. In Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020, volume 2769.
Vevake Balaraman and Bernardo Magnini. 2020b.
Proactive systems and influenceable users: Simulating proactivity in task-oriented dialogues. In Proc. 24th Workshop Semantics Pragmatics Dialogue
(SEMDIAL), pages 1–12.
Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran Glavas. 2021. Redditbias: A real-world resource for bias evaluation and debiasing of conversational language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 1941–1955.
Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - A largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026.
Kushal Chawla, Jaysa Ramirez, Rene Clever, Gale M.
Lucas, Jonathan May, and Jonathan Gratch. 2021.
Casino: A corpus of campsite negotiation dialogues for automatic negotiation systems. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2021, pages 3167–3185.
Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017a. A survey on dialogue systems: Recent advances and new frontiers. *SIGKDD Explor.*,
19(2):25–35.
Maximillian Chen, Weiyan Shi, Feifan Yan, Ryan Hou, Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2022a.
Seamlessly integrating factual information and social content with persuasive dialogue. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, AACL/IJCNLP 2022, pages 399–413.
Yun-Nung Chen, Asli Celikyilmaz, and Dilek HakkaniTür. 2017b. Deep learning for dialogue systems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 8–14.
Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, and Kai Yu. 2022b. Unidu:
Towards A unified generative dialogue understanding framework. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2022, pages 442–455.
Zhiyu Chen, Bing Liu, Seungwhan Moon, Chinnadhurai Sankar, Paul A. Crook, and William Yang Wang.
2022c. KETOD: knowledge-enriched task-oriented dialogue. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2581–
2593.
Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng.
2022. Improving multi-turn emotional support dialogue generation with lookahead strategy planning.
CoRR, abs/2210.04242.
Ssu Chiu, Maolin Li, Yen-Ting Lin, and Yun-Nung Chen. 2022. Salesbot: Transitioning from chit-chat to task-oriented dialogues. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 6143–6158.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2174–2184.
Jeffrey Dalton, Sophie Fischer, Paul Owoicho, Filip Radlinski, Federico Rossetto, Johanne R. Trippas, and Hamed Zamani. 2022. Conversational information seeking: Theory and application. In *SIGIR '22:*
The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3455–3458.
Yang Deng, Wenqiang Lei, Wenxuan Zhang, Wai Lam, and Tat-Seng Chua. 2022a. PACIFIC: towards proactive conversational question answering over tabular and textual data in finance. *CoRR*, abs/2210.08817.
Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. 2021. Unified conversational recommendation policy learning via graph-based reinforcement learning. In *SIGIR '21: The 44th International ACM*
SIGIR Conference on Research and Development in Information Retrieval, pages 1431–1441.
Yang Deng, Wenxuan Zhang, Wai Lam, Hong Cheng, and Helen Meng. 2022b. User satisfaction estimation with sequential dialogue act modeling in goaloriented conversational systems. In *WWW '22: The* ACM Web Conference 2022, pages 2998–3008.
Yang Deng, Wenxuan Zhang, Weiwen Xu, Wenqiang Lei, Tat-Seng Chua, and Wai Lam. 2022c. A
unified multi-task learning framework for multigoal conversational recommender systems. *CoRR*,
abs/2204.06923.
Oxford English Dictionary. 1989. Oxford english dictionary. *Simpson, Ja & Weiner, Esc*, 3.
Ritam Dutt, Sayan Sinha, Rishabh Joshi, Surya Shekhar Chakraborty, Meredith Riggs, Xinru Yan, Haogang Bao, and Carolyn P. Rosé. 2021. Resper: Computationally modelling resisting strategies in persuasive conversations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 78–90.
Nouha Dziri, Sivan Milton, Mo Yu, Osmar R. Zaïane, and Siva Reddy. 2022. On the origin of hallucinations in conversational models: Is it the datasets or the models? In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, pages 5271–5285.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In *Proceedings* of ACL 2018, Tutorial Abstracts, pages 2–7.
Jianfeng Gao, Michel Galley, and Lihong Li. 2019. Neural approaches to conversational AI. Found. Trends Inf. Retr., 13(2-3):127–298.
Jianfeng Gao, Chenyan Xiong, and Paul Bennett. 2020.
Recent advances in conversational information retrieval. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, pages 2421–
2424.
Jianfeng Gao, Chenyan Xiong, Paul Bennett, and Nick Craswell. 2022. Neural approaches to conversational information retrieval. *CoRR*, abs/2201.05176.
Adam M Grant and Susan J Ashford. 2008. The dynamics of proactivity at work. *Research in organizational* behavior, 28:3–34.
Meiqi Guo, Mingda Zhang, Siva Reddy, and Malihe Alikhani. 2021. Abg-coqa: Clarifying ambiguity in conversational question answering. In *AKBC 2021*.
Prakhar Gupta, Harsh Jhamtani, and Jeffrey P. Bigham.
2022. Target-guided dialogue response generation using commonsense and data augmentation. In *Findings of the Association for Computational Linguistics:*
NAACL 2022, pages 1301–1317.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 2333–2343.
Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021.
$q^2$: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP
Zhiqiang Hu, Roy Ka-Wei Lee, and Nancy F. Chen.
2022. Are current task-oriented dialogue systems able to satisfy impolite users? *CoRR*,
abs/2210.12942.
Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020.
Challenges in building intelligent open-domain dialog systems. *ACM Trans. Inf. Syst.*, 38(3):21:1–
21:32.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022. Language models as zeroshot planners: Extracting actionable knowledge for embodied agents. In *International Conference on* Machine Learning, ICML 2022, volume 162, pages 9118–9147.
Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan W. Black, and Yulia Tsvetkov. 2021.
Dialograph: Incorporating interpretable strategygraph networks into negotiation dialogues. In 9th International Conference on Learning Representations, ICLR 2021.
Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. 2022. Prosocialdialog: A prosocial backbone for conversational agents. *CoRR*,
abs/2205.12688.
Yosuke Kishinami, Reina Akama, Shiki Sato, Ryoko Tokuhisa, Jun Suzuki, and Kentaro Inui. 2022.
Target-guided open-domain conversation planning.
In *COLING 2022*, pages 660–668.
Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. 2020a. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In *WSDM 2020*, pages 304–312.
Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 1437–
1447.
Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua.
2020b. Interactive path reasoning on graph for conversational recommendation. In *KDD '20: The 26th* ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2073–2083.
Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, and TatSeng Chua. 2022. Interacting with non-cooperative user: A new paradigm for proactive dialogue policy.
In SIGIR '22: The 45th International ACM SIGIR
Conference on Research and Development in Information Retrieval, pages 212–222.
Haoran Li, Yangqiu Song, and Lixin Fan. 2022a. You don't know my favorite color: Preventing dialogue representations from revealing speakers' private personas. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, pages 5858–5870.
Huihan Li, Tianyu Gao, Manan Goenka, and Danqi Chen. 2022b. Ditch the gold standard: Re-evaluating conversational question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 8074–
8085.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal.
2018. Towards deep conversational recommendations. In *Advances in Neural Information Processing* Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, pages 9748–9758.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In *Proceedings of the Eighth International Joint Conference on* Natural Language Processing, IJCNLP 2017, pages 986–995.
Yu Li, Kun Qian, Weiyan Shi, and Zhou Yu. 2020. Endto-end trainable non-collaborative dialog system. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 8293–8302.
Jiawei Liu, Kaisong Song, Yangyang Kang, Guoxiu He, Zhuoren Jiang, Changlong Sun, Wei Lu, and Xiaozhong Liu. 2021a. A role-selected sharing network for joint machine-human chatting handoff and service satisfaction analysis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, pages 9731–9741.
Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021b. Towards emotional support dialog systems. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 3469–3483.
Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting Liu. 2020. Towards conversational recommendation over multi-type dialogs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, pages 1036–1049.
Zeming Liu, Jun Xu, Zeyang Lei, Haifeng Wang, ZhengYu Niu, and Hua Wu. 2022. Where to go for the holidays: Towards mixed-type dialogs for clarification of user goals. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics,*
ACL 2022, pages 1024–1034.
Kshitij Mishra, Azlaan Mustafa Samad, Palak Totala, and Asif Ekbal. 2022. PEPDS: A polite and empathetic persuasive dialogue system for charity donation. In *Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022*,
pages 424–440.
Jinjie Ni, Vlad Pandelea, Tom Young, Haicang Zhou, and Erik Cambria. 2022. Hitkg: Towards goaloriented conversations via multi-hierarchy learning.
In *Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022*, pages 11112–11120.
Baolin Peng, Chunyuan Li, Zhu Zhang, Chenguang Zhu, Jinchao Li, and Jianfeng Gao. 2021. RADDLE:
an evaluation benchmark and analysis platform for robust task-oriented dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 4418–4429.
Jinghui Qin, Zheng Ye, Jianheng Tang, and Xiaodan Liang. 2020. Dynamic knowledge routing network for target-guided open-domain conversation. In AAAI
2020, pages 8657–8664.
Minghui Qiu, Xinjing Huang, Cen Chen, Feng Ji, Chen Qu, Wei Wei, Jun Huang, and Yin Zhang. 2021. Reinforced history backtracking for conversational question answering. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, pages 13718–
13726.
Konigari Rachna, Saurabh Ramola, Vijay Vardhan Alluri, and Manish Shrivastava. 2021. Topic shift detection for mixed initiative response. In *SIGdial 2021*,
pages 161–166.
Siva Reddy, Danqi Chen, and Christopher D. Manning.
2019. Coqa: A conversational question answering challenge. *Trans. Assoc. Comput. Linguistics*, 7:249– 266.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021,
pages 300–325.
Martin Saveski, Brandon Roy, and Deb Roy. 2021. The structure of toxic conversations on twitter. In WWW
'21: The Web Conference 2021, pages 1086–1097.
J Schulman, B Zoph, C Kim, J Hilton, J Menick, J Weng, JFC Uribe, L Fedus, L Metz, M Pokorny, et al. 2022.
Chatgpt: Optimizing language models for dialogue.
John R Searle. 1992. *The rediscovery of the mind*. MIT
press.
Ivan Sekulic, Mohammad Aliannejadi, and Fabio Crestani. 2022. Evaluating mixed-initiative conversational search systems via user simulation. In WSDM
'22: The Fifteenth ACM International Conference on Web Search and Data Mining, pages 888–896.
Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, and Zhou Yu. 2022. Selective differential privacy for language modeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, pages 2848–2859.
Weiyan Shi, Yu Li, Saurav Sahay, and Zhou Yu. 2021.
Refine and imitate: Reducing repetition and inconsistency in persuasion dialogues via reinforcement learning and human demonstration. In *Findings of the* Association for Computational Linguistics: EMNLP
2021, pages 3478–3492.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. *CoRR*, abs/2208.03188.
Pei-Hao Su, Nikola Mrkšić, Iñigo Casanueva, and Ivan Vulić. 2018. Deep learning for conversational AI.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 27–32.
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 4661–4676.
Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the safety of conversational models: Taxonomy, dataset, and benchmark. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3906–3923.
Kai Sun, Seungwhan Moon, Paul A. Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, and Claire Cardie. 2021.
Adding chit-chat to enhance task-oriented dialogues.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, pages 1570–1583.
Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric P. Xing, and Zhiting Hu. 2019.
Target-guided open-domain conversation. In ACL
2019, pages 5624–5634.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Kathleen S.
Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed H. Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. *CoRR*, abs/2201.08239.
Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. MISC: A mixed strategyaware model integrating COMET for emotional support conversation. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics, ACL 2022, pages 308–319.
Megan Ung, Jing Xu, and Y-Lan Boureau. 2022. Saferdialogues: Taking feedback gracefully after conversational safety failures. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 6462–6481.
Lingzhi Wang, Shafiq R. Joty, Wei Gao, Xingshan Zeng, and Kam-Fai Wong. 2022. Improving conversational recommender system via contextual and timeaware modeling with less domain-specific knowledge.
CoRR, abs/2209.11386.
Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In *Proceedings of* the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 5635–5649.
Qingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2021.
Alternating recurrent dialog model with large-scale pre-trained language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 1292–1301.
Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang.
2019. Proactive human-machine conversation with explicit conversation goal. In *ACL 2019*, pages 3794– 3804.
Huiyuan Xie, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, and Ann A. Copestake. 2021. TIAGE: A benchmark for topic-shift aware dialog modeling. In *Findings of ACL: EMNLP 2021*, pages 1684–1690.
Jun Xu, Haifeng Wang, Zhengyu Niu, Hua Wu, and Wanxiang Che. 2020. Knowledge graph grounded goal planning for open-domain conversation generation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 9338–9345.
Atsuki Yamaguchi, Kosui Iwasa, and Katsuhide Fujita.
2021. Dialogue act-based breakdown detection in negotiation dialogues. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 745–757.
Rui Yan, Juntao Li, and Zhou Yu. 2022. Deep learning for dialogue systems: Chit-chat and beyond. *Foundations and Trends in Information Retrieval*, 15(5):417–
589.
Runzhe Yang, Jingxiao Chen, and Karthik Narasimhan.
2021. Improving dialog systems for negotiation with personality modeling. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, pages 681–693.
Wei Yang, Luchen Tan, Chunwei Lu, Anqi Cui, Han Li, Xi Chen, Kun Xiong, Muzi Wang, Ming Li, Jian Pei, and Jimmy Lin. 2019. Detecting customer complaint escalation with recurrent neural networks and manually-engineered features. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2019, pages 56–63.
Zhitong Yang, Bo Wang, Jinfeng Zhou, Yue Tan, Dongming Zhao, Kun Huang, Ruifang He, and Yuexian Hou. 2022. Topkg: Target-oriented dialog via global planning on knowledge graph. In *Proceedings of* the 29th International Conference on Computational Linguistics, COLING 2022, pages 745–755.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language models. *CoRR*, abs/2210.03629.
Tom Young, Frank Xing, Vlad Pandelea, Jinjie Ni, and Erik Cambria. 2022. Fusing task-oriented and open-domain dialogues in conversational agents. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, pages 11622–11629.
Hamed Zamani, Susan T. Dumais, Nick Craswell, Paul N. Bennett, and Gord Lueck. 2020. Generating clarifying questions for information retrieval. In WWW 2020, pages 418–428.
Hamed Zamani, Johanne R. Trippas, Jeff Dalton, and Filip Radlinski. 2022. Conversational information seeking. *CoRR*, abs/2201.08808.
Chen Zhang, Yiming Chen, Luis Fernando D'Haro, Yan Zhang, Thomas Friedrichs, Grandee Lee, and Haizhou Li. 2021a. Dynaeval: Unifying turn and dialogue level evaluation. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, pages 5676–5689.
Jun Zhang, Yan Yang, Chencai Chen, Liang He, and Zhou Yu. 2021b. KERS: A knowledge-enhanced framework for recommendation dialog systems with multiple subgoals. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1092–1101.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, pages 2204–2213.
Shuo Zhang and Krisztian Balog. 2020. Evaluating conversational recommender systems via user simulation.
In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1512–1520.
Yongfeng Zhang, Xu Chen, Qingyao Ai, Liu Yang, and W. Bruce Croft. 2018b. Towards conversational search and recommendation: System ask, user respond. In *CIKM 2018*, pages 177–186.
Xinyan Zhao, Bin He, Yasheng Wang, Yitong Li, Fei Mi, Yajiao Liu, Xin Jiang, Qun Liu, and Huanhuan Chen.
2022. Unids: A unified dialogue system for chit-chat and task-oriented dialogues. In *Proceedings of the* Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, DialDoc@ACL 2022, pages 13–22.
Peixiang Zhong, Yong Liu, Hao Wang, and Chunyan Miao. 2021. Keyword-guided neural conversational model. In *Thirty-Fifth AAAI Conference on Artificial* Intelligence, AAAI 2021, pages 14568–14576.
Yiheng Zhou, Yulia Tsvetkov, Alan W. Black, and Zhou Yu. 2020. Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history. In 8th International Conference on Learning Representations, ICLR 2020.
Caleb Ziems, Jane A. Yu, Yi-Chia Wang, Alon Y.
Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems.
In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022*,
pages 3755–3773. |
zhao-etal-2023-complex | Complex Reasoning in Natural Language | https://aclanthology.org/2023.acl-tutorials.2 | Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning, knowledge retrieval, grounding, commonsense reasoning, etc. A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021) and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area. We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails. This tutorial then reviews recent promising directions for tackling these tasks. Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions. | # Cutting-Edge Tutorial: Complex Reasoning Over Natural Language
Wenting Zhao, Cornell University, [email protected]
Mor Geva∗, Google Research, [email protected]
Bill Yuchen Lin∗, Allen Institute for AI, [email protected]
Michihiro Yasunaga∗, Stanford University, [email protected]
Aman Madaan∗, Carnegie Mellon University, [email protected]
Tao Yu∗, The University of Hong Kong, [email protected]

∗Equal Contribution.
## 1 Tutorial Overview
Teaching machines to reason over texts has been a long-standing goal of natural language processing (NLP). To this end, researchers have designed a diverse set of complex reasoning tasks that involve compositional reasoning (Geva et al., 2021; Trivedi et al., 2022), knowledge retrieval (Yang et al., 2018; Kwiatkowski et al., 2019), grounding (Budzianowski et al., 2018; Xie et al., 2022; Shi et al., 2021), commonsense reasoning (Talmor et al., 2021a; Lin et al., 2020), etc.
A standard choice for building systems that perform a desired type of reasoning is to fine-tune a pretrained language model (LM) on specific downstream tasks. However, recent research has demonstrated that such a straightforward approach is often brittle. For example, Elazar et al. (2021)
and Branco et al. (2021) show that, on question-answering (QA) tasks, similar performance can be achieved with questions removed from the inputs. Min et al. (2019), Chen and Durrett (2019), and Tang et al. (2021) show that models trained on multi-hop QA do not generalize to answer single-hop questions. The reasoning capabilities of these models thus remain at a surface level, i.e., exploiting data patterns. Consequently, augmenting LMs with techniques that make them robust and effective becomes an active research area.
We will start the tutorial by providing an overview of complex reasoning tasks where the standard application of pretrained language models fails (in Sec 2). This tutorial then reviews recent promising directions for tackling these tasks (in Sec 3). Specifically, we focus on the following groups of approaches that explicitly consider problem structures: (1) knowledge-augmented methods, where the knowledge is either incorporated during fine-tuning or pretraining; (2) few-shot prompting methods, which effectively guide the models to follow instructions; (3) neuro-symbolic methods, which produce explicit intermediate representations; and, (4) rationale-based methods, one of the most popular forms of the neuro-symbolic methods, which highlight subsets of input as explanations for individual model predictions. The tutorial materials are online at https://wenting-zhao.github.io/complex-reasoning-tutorial.
## 2 Problem Introduction

We will start with NLP tasks that require reasoning over multiple pieces of information in a provided context, covering various reasoning skills such as fact composition, mathematical reasoning, inferring semantic structures, and reasoning about entities (Yang et al., 2018; Yu et al., 2018; Budzianowski et al., 2018; Dua et al., 2019; Ho et al., 2020; Dasigi et al., 2019; Cobbe et al., 2021; Trivedi et al., 2022). Then, we will discuss benchmarks that combine multiple sources of information (i.e., modalities), e.g., paragraphs, tables, and images (Chen et al., 2020b; Talmor et al., 2021b; Pasupat and Liang, 2015; Chen et al., 2020a).
We will present open-domain setups where external knowledge should be integrated into the reasoning process (Geva et al., 2021; Onoe et al., 2021; Ferguson et al., 2020; Talmor and Berant, 2018). In addition, we will review tasks that require commonsense reasoning (Talmor et al., 2021a; Rudinger et al., 2020; Sap et al., 2019; Saha et al., 2021).
We will conclude this part by highlighting key practices for dataset creation that increase data diversity and minimize annotation biases and reasoning shortcuts (Bartolo et al., 2020; Khot et al.,
2020; Geva et al., 2019; Parmar et al., 2022).
## 3 Approaches
(1a) Knowledge-Augmented Fine-Tuning Tackling complex reasoning problems that require commonsense knowledge and entity-centric facts can benefit from access to external knowledge sources. How to incorporate knowledge during fine-tuning has thus been extensively studied. A general method is to retrieve knowledge facts relevant to a given situation (e.g., a question) and fuse them with an LM-based neural module. External knowledge can be categorized into three forms:
structured (e.g., knowledge graphs like ConceptNet (Speer et al., 2017)), unstructured (e.g., knowledge corpora such as Wikipedia and GenericsKB (Bhakthavatsalam et al., 2020)), and instance-based (i.e., annotated examples).
In this section, we will cover methods for these three forms of knowledge in a variety of reasoning problems. For structured knowledge, KagNet (Lin et al., 2019) is a typical method that focuses on fusing retrieved subgraphs from ConceptNet for fine-tuning LMs to perform commonsense reasoning. Follow-up works include MHGRN (Feng et al., 2020), QA-GNN (Yasunaga et al., 2021),
and GreaseLM (Zhang et al., 2022b). For unstructured knowledge, we will introduce methods that encode a large knowledge corpus as neural memory modules to support knowledge retrieval for reasoning. We will start with DPR (Karpukhin et al., 2020), one of the most popular methods that embed Wikipedia as a dense matrix of fact embeddings. Then, we will cover DrKIT (Dhingra et al., 2020), which improves multi-hop reasoning ability by encoding sparse entity mentions. Additionally, we introduce DrFact (Lin et al., 2021), a fact-level extension for DrKIT that focuses on commonsense reasoning. For instance-based knowledge, a promising direction, we will also introduce methods such as RACo (Yu et al., 2022b),
ReCross (Lin et al., 2022), and QEDB (Chen et al.,
2022b), which aim to exploit annotated examples to enhance reasoning.
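To make the retrieve-and-fuse pattern concrete, the following minimal Python sketch illustrates the unstructured-knowledge case: score knowledge facts against a question, keep the top-k, and concatenate them with the question as input to the downstream LM. The tiny corpus and the bag-of-words encoder are toy stand-ins for a real knowledge source and a trained dense encoder such as DPR's bi-encoder.

```python
# A minimal sketch of retrieve-and-fuse for knowledge-augmented fine-tuning.
# The toy corpus and bag-of-words "encoder" stand in for Wikipedia-scale
# knowledge and a learned dense encoder (e.g., a DPR-style bi-encoder).
from collections import Counter

KNOWLEDGE_CORPUS = [
    "A refrigerator is an appliance used to keep food cold.",
    "Bees produce honey and live in hives.",
    "The Eiffel Tower is located in Paris, France.",
]

def encode(text):
    # Toy encoder: lowercase bag-of-words counts.
    return Counter(w.strip(".,?") for w in text.lower().split())

def score(query_vec, doc_vec):
    # Inner product, the same scoring rule dense retrievers apply to embeddings.
    return sum(count * doc_vec[w] for w, count in query_vec.items())

def retrieve(question, corpus, k=2):
    q = encode(question)
    return sorted(corpus, key=lambda d: score(q, encode(d)), reverse=True)[:k]

question = "Where is the Eiffel Tower located?"
retrieved_facts = retrieve(question, KNOWLEDGE_CORPUS)

# Fuse: concatenate retrieved facts with the question as the reader/LM input.
fused_input = " [SEP] ".join([question] + retrieved_facts)
print(fused_input)
```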
(1b) Knowledge-Augmented Pretraining. Pretraining performs self-supervised learning of representations from large-scale data, which holds the potential to help a broader range of downstream tasks. We will review recent efforts to incorporate knowledge and reasoning abilities into LMs during pretraining. We first discuss retrieval-augmented pretraining (Guu et al., 2020; Lewis et al., 2020a; Borgeaud et al., 2021; Yasunaga et al., 2022b),
which retrieves relevant documents from an external memory and feeds them to the model as an additional input. This helps not only knowledge-intensive tasks but also some reasoning-intensive tasks because the models learn to process multiple documents for multi-hop reasoning (Yasunaga et al., 2022b). We then discuss works that integrate structured knowledge bases/graphs. For example, some use knowledge graphs to make additional pretraining objectives for LMs (Xiong et al., 2020; Shen et al., 2020; Wang et al., 2021; Liu et al., 2021; Yu et al., 2022a; Ke et al., 2021); others retrieve and feed entity or knowledge graph information as a direct input to the model (Zhang et al., 2019; Rosset et al., 2020; Liu et al., 2020; Sun et al., 2021; Agarwal et al., 2021; Sun et al., 2020; He et al.,
2020; Yasunaga et al., 2022a). Recent works show that these retrieved knowledge graphs can provide LMs with scaffolds for performing complex reasoning over entities, such as logical and multi-hop reasoning (Yasunaga et al., 2022a).
(2) Few-Shot Prompting Approaches. The rise of large pretrained LMs, such as GPT-3 (Brown et al., 2020), OPT (Zhang et al., 2022a), and PaLM (Chowdhery et al., 2022), has unlocked the potential of few-shot prompting methods for a wide range of reasoning tasks. However, despite their strengths, these LMs in the few-shot prompting mode have peculiar failure modes, especially when it comes to complex reasoning tasks (Marcus, 2022). Further, the *prompt* has to be designed carefully, and it has been shown that seemingly innocuous changes to the prompt (e.g., order of examples or the format of text) can drastically impact the performance (Le Scao and Rush, 2021; Mishra et al.,
2021). In response, several techniques have been developed to make few-shot prompting methods less susceptible to the exact prompt choice. This section will provide a high-level overview of few-shot prompting and introduce specific classes of techniques that can further improve few-shot prompting methods on complex reasoning tasks.
First, we will introduce prompt-design techniques like chain-of-thought prompting (Wei et al.,
2022b) and least-to-most prompting (Wei et al.,
2022c), which encourage an LM to generate reasoning steps as part of the solution, helping with problem decomposition and enhanced reasoning. Next, we will cover techniques that change the prompt dynamically for each input query. The methods covered in this part include selecting the training examples in the prompt (Liu et al., 2022a) and editing the prompt to incorporate feedback received on a similar-input (Madaan et al., 2022a).
Finally, we will cover techniques that leverage code-generation models for complex reasoning tasks. Representative techniques in this part will cover i) the use of code-generation models for structured commonsense reasoning (Madaan et al.,
2022b), ii) algorithmic reasoning by expanding detailed instructions in the prompt (Zhou et al.,
2022), and iii) generating chain-of-thought styled reasoning chains in Python code to tackle complex symbolic reasoning tasks (Gao et al., 2022).
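As a minimal illustration of the program-aided pattern, the snippet below executes a Python program of the kind such models are prompted to produce; the generated_program string is a hand-written stand-in for model output, and running it with the interpreter yields the final answer.

```python
# A minimal sketch of PAL-style program-aided reasoning: the LM is prompted to
# emit Python instead of free-text reasoning, and a Python interpreter derives
# the final answer. The generated_program string is a hand-written stand-in
# for what a code-generation model would produce.
QUESTION = "A farmer has 12 eggs, sells 5, then collects 7 more. How many eggs now?"

generated_program = """
eggs = 12          # initial eggs
eggs -= 5          # sells 5
eggs += 7          # collects 7 more
answer = eggs
"""

namespace = {}
exec(generated_program, namespace)  # the symbolic step: run the program
print(QUESTION)
print("Answer:", namespace["answer"])  # -> 14
```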
(3) Neuro-Symbolic Approaches. Although performance on NLP tasks is dominated by neural *end-to-end* systems that directly map inputs to outputs
(Devlin et al., 2019; Raffel et al., 2020), these approaches lack interpretability and robustness. *Symbolic* approaches, on the other hand, produce explicit intermediate reasoning trajectories such as logical forms, reasoning paths, or program code, which might then be executed to derive a final output (Zettlemoyer and Collins, 2005; Chen et al.,
2019b, *i.a.*). Compared to both end-to-end and chain-of-thought methods (Wei et al., 2022a, *i.a.*), the reasoning processes produced by the symbolic methods are interpretable, and the resulting execution makes them more robust to input changes.
Researchers (Andreas et al., 2016; Liang et al.,
2017; Gupta et al., 2019; Khot et al., 2021; Zhu et al., 2022; Cheng et al., 2022; Gao et al., 2022; Schick et al., 2023, *i.a.*) also propose to combine neural modules and symbolic components to leverage advantages of both approaches. More specifically, Neural-Symbolic Machines (Liang et al.,
2017) adopt a seq-to-seq model to generate programs and a Lisp interpreter that performs program execution. Chen et al. (2019b) design a domain-specific language for question answering over text. BREAK (Wolfson et al., 2020) proposes a meaningful representation, QDMR, that decomposes the question into multiple steps. Thorne et al. (2021)
propose a mixed pipeline of logical forms and neural networks, aiming to address scalability and noisy, messy data in a natural language database.
Another stream of works, called neural module networks (Andreas et al., 2016; Das et al., 2018; Gupta et al., 2019), proposes to generate symbolic programs that are further softly executed by the corresponding neural modules. Khot et al. (2021) propose text module networks to solve complex tasks by decomposing them into simpler ones solvable by existing QA models and a symbolic calculator.
However, most prior neural-symbolic methods require the elaborate human design of the symbolic language and the calibration of corresponding neural modules to tackle problems in a specific domain with large training data. Recently, Cheng et al.
(2022) propose Binder, a new neural-symbolic system based on GPT-3 Codex (Chen et al., 2021) that supports *flexible* neural module calls that will enable *higher coverage* for the symbolic language, while only requiring *few annotations*. Also, Gao et al. (2022) introduce PAL, a new method based on Codex that generates executable programs as the intermediate reasoning steps and leverages a Python interpreter to derive final answers.
This section will begin by discussing the highlevel comparison among the end-to-end, chain-ofthought, symbolic (e.g., semantic parsing), and neural-symbolic approaches. We will then move to provide a high-level overview of different neuralsymbolic approaches. In this part, we will mainly focus on neural-symbolic approaches with LMs.
Finally, we will cover recent techniques incorporating GPT-3 Codex in neural-symbolic approaches.
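To illustrate the overall neuro-symbolic recipe at a toy scale, the sketch below executes a hand-written, QDMR-like decomposition of a comparison question over a small structured store; in a real system the decomposition (or program) would be produced by a neural model such as Codex, and the executor would be correspondingly richer.

```python
# A minimal sketch of the symbolic-execution idea behind neuro-symbolic systems:
# a question is (hypothetically) decomposed by a neural model into simple steps,
# and a deterministic executor runs the steps over a small structured store.
# The decomposition below is hand-written for illustration.
HEIGHTS_M = {"Eiffel Tower": 330, "Statue of Liberty": 93, "Big Ben": 96}

# QDMR-like program for "Which is taller, the Eiffel Tower or Big Ben?"
program = [
    ("lookup", "Eiffel Tower"),   # step 1
    ("lookup", "Big Ben"),        # step 2
    ("argmax", 0, 1),             # step 3: compare the results of steps 1 and 2
]

def execute(program, store):
    results = []
    for op, *args in program:
        if op == "lookup":
            results.append((args[0], store[args[0]]))
        elif op == "argmax":
            candidates = [results[i] for i in args]
            results.append(max(candidates, key=lambda x: x[1])[0])
        else:
            raise ValueError(f"unknown op: {op}")
    return results[-1]

print(execute(program, HEIGHTS_M))  # -> "Eiffel Tower"
```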
(4) Rationale-Based Approaches. Rationale-based approaches extract parts of the input to serve as *reasoning certificates*, offering end users a way to evaluate the trustworthiness of the predictions. Depending on the reasoning type, rationales of different granularity are identified: they can be tokens, sentences, or documents (DeYoung et al., 2020; Kwiatkowski et al., 2019). NLP systems can benefit from rationales in several ways. Yang et al. (2018) show that providing rationales as additional supervision improves models' capacity to perform multi-hop reasoning. More recently, Chen et al. (2022a) demonstrate the potential of using such methods to build more robust NLP systems.
Existing methods for extracting rationales often require supervision; they either apply multi-task loss functions (Joshi et al., 2020; Groeneveld et al., 2020), or design specialized network architectures to incorporate inductive biases (Tu et al., 2019; Fang et al., 2020). Because rationale annotations are expensive to collect and not always available, recent effort has been devoted to semi-supervised and unsupervised methods. Chen et al. (2019a) leverage entity taggers to build silver reasoning chains used for rationale supervision. Glockner et al. (2020) and Atanasova et al. (2022) design unsupervised objectives for extracting rationales in multi-hop QA systems. Finally, latent-variable approaches are a natural fit for unsupervised learning (Lei et al., 2016; Zhou et al., 2020; Lewis et al.,
2020b). By modeling rationales as a latent variable, these approaches provide a principled way to explicitly impose constraints on the reasoning process.
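The following minimal sketch shows the select-then-predict structure shared by many rationale-based methods: a selector scores input tokens and keeps a small subset as the rationale, and the predictor sees only that subset. The lexicon-based selector and predictor here are toy stand-ins for learned (possibly latent-variable) modules.

```python
# A minimal sketch of the select-then-predict pattern behind rationale-based
# methods (in the spirit of Lei et al., 2016). The lexicon-based selector and
# predictor are toy stand-ins for learned neural modules.
POSITIVE = {"great", "wonderful", "loved"}
NEGATIVE = {"boring", "awful", "hated"}

def select_rationale(tokens, k=2):
    # Selector: score tokens by how opinion-bearing they look, keep the top-k.
    scores = [1.0 if t in POSITIVE | NEGATIVE else 0.0 for t in tokens]
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

def predict_from_rationale(rationale):
    # Predictor: sees only the extracted rationale, not the full input.
    pos = sum(t in POSITIVE for t in rationale)
    neg = sum(t in NEGATIVE for t in rationale)
    return "positive" if pos >= neg else "negative"

tokens = "the movie was wonderful and i loved the soundtrack".split()
rationale = select_rationale(tokens)
print("rationale:", rationale)                       # ['wonderful', 'loved']
print("prediction:", predict_from_rationale(rationale))
```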
## 3.1 Schedule
1. Introduction & Motivations (15 min.)
2. Benchmarks & Evaluation (25 min.)
3. Knowledge-augmented Fine-tuning (25 min.)
4. Knowledge-augmented Pretraining (25 min.)
5. Break (30 min.)
6. Neuro-Symbolic Approaches (25 min.)
7. Few-shot Prompting Approaches (25 min.)
8. Rationale-Based Approaches (25 min.)
9. Concluding discussion (15 min.)
## 4 Instructor Information
Wenting Zhao is a Ph.D. student in Computer Science at Cornell University. Her research focuses on the intersection of reasoning and NLP.
She is especially interested in developing explainable methods for complex reasoning problems.
Mor Geva is a postdoctoral researcher, now at Google Research and previously at the Allen Institute for AI. Her research focuses on debugging the inner workings of black-box NLP models, to increase their transparency, control their operation, and improve their reasoning abilities. She is organizing the next edition of the Workshop on Commonsense Reasoning and Representation.
Bill Yuchen Lin is a postdoctoral researcher at the Allen Institute for AI. He obtained his Ph.D. at USC advised by Prof. Xiang Ren. His research goal is to teach machines to think, talk, and act with commonsense knowledge and commonsense reasoning ability as humans do. He was a co-author of the tutorial on Knowledge-Augmented Methods for Natural Language Processing and the *Workshop* on Commonsense Representation and Reasoning at ACL 2022.
Michihiro Yasunaga is a Ph.D. student in Computer Science at Stanford University. His research interest is in developing generalizable models with knowledge, including commonsense, science, and reasoning abilities. He co-organized the Workshop on Structured and Unstructured Knowledge Integration (SUKI) at NAACL 2022.
Aman Madaan is a Ph.D. student at the School of Computer Science, Carnegie Mellon University. He is interested in large language models, feedback-driven generation, and the intersection of code generation and natural language reasoning.
He helped organize the 1st and 2nd Workshops on Natural Language Generation, Evaluation, and Metrics (GEM) at ACL 2021 and EMNLP 2022.
Tao Yu is an assistant professor of computer science at The University of Hong Kong. He completed his Ph.D. at Yale University and was a postdoctoral fellow at the University of Washington. He works on executable language understanding, such as semantic parsing and code generation, and large LMs. Tao is the recipient of an Amazon Research Award. He co-organized multiple workshops in Semantic Parsing and Structured and Unstructured Knowledge Integration at EMNLP and NAACL.
## 5 Other Information
Reading List Rogers et al. (2022); Storks et al.
(2019); Liu et al. (2022b); Lyu et al. (2022); Wiegreffe and Marasović (2021); Andreas et al. (2016);
Cheng et al. (2022); Gao et al. (2022).
Breadth We estimate that approximately 30% of the tutorial will center around work done by the presenters. This tutorial categorizes promising approaches for complex reasoning tasks into several groups, and each of these groups includes a significant amount of work by other researchers.
Diversity considerations The challenges of building robust and generalizable NLP systems exist in every language. The methods covered in this tutorial are language-agnostic and can be extended to non-English contexts.
The instructors all have different affiliations (i.e., Cornell, Google, Stanford, USC, HKU, and CMU). They are three Ph.D. students, two postdoctoral researchers, and one assistant professor; two of the instructors are female.
Prerequisites The following knowledge is assumed:
- Machine Learning: basic probability theory, supervised learning, transformer models
- NLP: Familiarity with pretrained LMs; standard NLP tasks such as question answering, text generation, etc.
Estimated number of participants 150.

Preferable venue ACL.
Targeted audience Researchers and practitioners who seek to develop a background in complex reasoning tasks where the standard application of pretrained language models fails. By providing a systematic overview of recent promising approaches for these tasks, this tutorial will hopefully reveal new research opportunities to the audience.
## References
Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In *North American Chapter* of the Association for Computational Linguistics
(NAACL).
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39–48.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2022. Diagnosticsguided explanation generation. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 36, pages 10445–10453.
Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the AI:
Investigating adversarial human annotation for reading comprehension. *Transactions of the Association* for Computational Linguistics, 8:662–678.
Sumithra Bhakthavatsalam, Chloe Anastasiades, and Peter Clark. 2020. Genericskb: A knowledge base of generic statements. *ArXiv*, abs/2005.00660.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2021. Improving language models by retrieving from trillions of tokens.
arXiv preprint arXiv:2112.04426.
Ruben Branco, António Branco, João António Rodrigues, and João Ricardo Silva. 2021. Shortcutted commonsense: Data spuriousness in deep learning of commonsense reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1504–1521, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.
Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022a. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, pages 3792–3805, Seattle, United States.
Association for Computational Linguistics.
Jifan Chen and Greg Durrett. 2019. Understanding dataset design choices for multi-hop reasoning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4026–4032, Minneapolis, Minnesota. Association for Computational Linguistics.
Jifan Chen, Shih-ting Lin, and Greg Durrett. 2019a.
Multi-hop question answering via reasoning chains.
arXiv preprint arXiv:1910.02610.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating Large Language Models Trained on Code. arXiv preprint arXiv:2107.03374.
Wenhu Chen, William W. Cohen, Michiel de Jong, Nitish Gupta, Alessandro Presta, Pat Verga, and John Wieting. 2022b. Qa is the new kr: Question-answer pairs as knowledge bases. *ArXiv*, abs/2207.00630.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020a. Tabfact: A large-scale dataset for table-based fact verification. In *International Conference on Learning Representations*.
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020b. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 1026–1036, Online. Association for Computational Linguistics.
Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V Le. 2019b. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension.
In *International Conference on Learning Representations*.
Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2022. Binding language models in symbolic languages. *ArXiv*.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways. arXiv preprint arXiv:2204.02311.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *arXiv preprint* arXiv:2110.14168.
Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. 2018. Neural modular control for embodied question answering. In *Conference* on Robot Learning, pages 53–62. PMLR.
Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A.
Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5925–5932, Hong Kong, China. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to
evaluate rationalized NLP models. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online.
Association for Computational Linguistics.
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In *International Conference on Learning Representations*.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics.
sitaram-etal-2023-everything | Everything you need to know about Multilingual {LLM}s: Towards fair, performant and reliable models for languages of the world | https://aclanthology.org/2023.acl-tutorials.3 | This tutorial will describe various aspects of scaling up language technologies to many of the world{'}s languages by describing the latest research in Massively Multilingual Language Models (MMLMs). We will cover topics such as data collection, training and fine-tuning of models, Responsible AI issues such as fairness, bias and toxicity, linguistic diversity and evaluation in the context of MMLMs, specifically focusing on issues in non-English and low-resource languages. Further, we will also talk about some of the real-world challenges in deploying these models in language communities in the field. With the performance of MMLMs improving in the zero-shot setting for many languages, it is now becoming feasible to use them for building language technologies in many languages of the world, and this tutorial will provide the computational linguistics community with unique insights from the latest research in multilingual models. | # Acl/Eacl/Emnlp 2023 Tutorial Proposal Everything You Need To Know About Multilingual Llms: Towards Fair, Performant And Reliable Models For The Languages Of The World
Sunayana Sitaram (Microsoft Research India), Monojit Choudhury (Microsoft Turing, India), Barun Patra (Microsoft Turing, USA), Vishrav Chaudhary (Microsoft Turing, USA), Kabir Ahuja (Microsoft Research India), Kalika Bali (Microsoft Research India)
## 1 Tutorial Content
This tutorial will describe various aspects of scaling up language technologies to many of the world's languages by presenting the latest research in Massively Multilingual Language Models (MMLMs).
We will cover topics such as data collection, training and fine-tuning of models, Responsible AI issues such as fairness, bias and toxicity, linguistic diversity and evaluation in the context of MMLMs, specifically focusing on issues in non-English and low-resource languages. Further, we will also talk about some of the real-world challenges in deploying these models in language communities in the field. With the performance of MMLMs improving in the zero-shot setting for many languages, it is now becoming feasible to use them for building language technologies in many languages of the world, and this tutorial will provide the computational linguistics community with unique insights from the latest research in multilingual models. Although past tutorials have covered some of these topics
(such as linguistic diversity, data and training of models), there has been a lot of interesting research in the recent past that the CL community will benefit from knowing about. Further, this will be the first tutorial (as per our knowledge) that will discuss issues of deployment in language communities and Responsible AI in the context of multilingual models.
This tutorial will present a broad survey covering work done by several research groups (as indicated in the references), including work done by the authors.
## Type Of The Tutorial: Cutting-Edge
Target audience and pre-requisites: The target audience for this tutorial are researchers from industry and academia who work on Large Language Models, and are interested in learning about the latest research in multilingual models to build systems for non-English languages, low-resource languages and multilingual speakers. We will not be covering the basics of LLMs, so we expect that the audience will be familiar with (at least the English versions of) models such as BERT.
## 1.1 Outline Of The Tutorial
We plan to have five talks of 30/40 minutes each, along with a 10 minute introduction, with 10 minutes for general discussion/spillover.
Introduction: We will start with a short introduction on MMLMs, describing the models that are available today and present the SOTA in model performance on various tasks across different languages.
Data and pre-training: The main goal of this section would be to outline the techniques leveraged for creating a high quality corpus for pretraining strong MMLMs. We will cover the challenges encountered in creating such a corpus as highlighted in CC100 (Conneau et al., 2020), mC4
(Xue et al., 2021), OSCAR (Ortiz Suárez et al.,
2020), ROOTS (Laurençon et al., 2022) etc., and provide an overview of the various stages of such a dataset creation pipeline. Ensuring the quality of the training corpus is highly important as it is directly correlated to the performance of MMLMs
(Kaplan et al., 2020). In addition to this, we will also discuss the pre-training strategies and possible extensions of the recipe to multiple languages (Conneau and Lample, 2019; Artetxe and Schwenk, 2019), describing how scaling (on both the data and model axes) can substantially help improve model performance (Conneau et al., 2020; Xue et al., 2021), aiding in bridging the gap between the English performance of a multilingual and an English-only model, thereby reducing the curse of multilinguality.
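As a concrete illustration of one such pre-training ingredient, the exponentiated (temperature-based) language sampling commonly used to rebalance high- and low-resource languages can be sketched as follows; the corpus sizes and the alpha value below are illustrative placeholders, not numbers from any specific paper.

```python
# Minimal sketch of temperature-based language sampling, a widely used recipe
# for balancing languages when assembling a multilingual pre-training corpus
# (cf. Conneau et al., 2020). The token counts are made-up illustration values.
def sampling_probs(token_counts, alpha=0.3):
    """Exponentiate and renormalize per-language data proportions."""
    total = sum(token_counts.values())
    raw = {lang: count / total for lang, count in token_counts.items()}
    scaled = {lang: p ** alpha for lang, p in raw.items()}
    norm = sum(scaled.values())
    return {lang: p / norm for lang, p in scaled.items()}

counts = {"en": 300_000_000, "hi": 20_000_000, "sw": 1_000_000}
print(sampling_probs(counts))  # low-resource languages are up-sampled
```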
Training paradigms and fine-tuning: We will describe different training paradigms (e.g., an Electra-based approach (Chi et al., 2022; He et al., 2021)) and how to leverage bitext data, discussing results of using contrastive learning approaches (Chi et al., 2021) or extensions to Electra-based approaches (Chi et al., 2022), as well as showing the benefits of going beyond English-centric bitexts (Patra et al., 2022). We will also discuss some orthogonal approaches for training encoder-decoder multilingual representation models (Liu et al., 2020; Ma et al., 2021), as well as complementary techniques to build better encoder models (e.g., adapter-based approaches (Pfeiffer et al.,
2022)). We will also focus on different strategies for improving the fine-tuning performance of these models. This includes techniques encouraging models to have more consistent predictions across languages (Zheng et al., 2021), leveraging weight perturbations to avoid overfitting (Wu et al.,
2022) or techniques to reduce the sharpness of loss minima for better generalization (Foret et al., 2021; Bahri et al., 2022).
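To make the consistency idea concrete, a generic sketch of a consistency-regularized fine-tuning step is shown below; it illustrates the spirit of such objectives rather than the exact formulation of Zheng et al. (2021), and `model` (a sequence-classification model), the two batches, `labels`, and `lam` are placeholders.

```python
# Generic sketch: in addition to the task loss on a source-language batch,
# penalize the KL divergence between the model's predictions on an example
# and on its translation, encouraging cross-lingual consistency.
import torch
import torch.nn.functional as F

def consistency_step(model, src_batch, tgt_batch, labels, lam=1.0):
    src_logits = model(**src_batch).logits   # (batch, num_classes)
    tgt_logits = model(**tgt_batch).logits
    task_loss = F.cross_entropy(src_logits, labels)
    consistency = F.kl_div(
        F.log_softmax(tgt_logits, dim=-1),   # input: log-probs on translation
        F.softmax(src_logits, dim=-1),       # target: probs on source
        reduction="batchmean",
    )
    return task_loss + lam * consistency
```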
Performance evaluation and reliability: While the state-of-the-art multilingual models support around 100 languages of the world, most existing multilingual benchmarks contain evaluation data in a handful of languages (Ahuja et al., 2022b). We will discuss some potential approaches to scale up multilingual evaluation like performance prediction (Lin et al., 2019; Xia et al., 2020; Ahuja et al.,
2022c) and structure probing (Müller-Eberstein et al., 2022; Clouâtre et al., 2022). We will also focus on measuring the cost-performance trade-offs and sample efficiencies of fine-tuning MMLMs with different sources of data (translation vs manual collection)(Ahuja et al., 2022a). Further, we will cover how to measure reliability in the confidence predictions of multilingual models under a zero-shot and few-shot setup by studying their calibration (Ahuja et al., 2022d).
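The calibration analysis mentioned above is typically summarized with expected calibration error (ECE); a minimal, paper-agnostic sketch of the computation is given below (the binning scheme and bin count are the usual defaults, not choices specific to any cited work).

```python
# Sketch of expected calibration error: average gap between confidence and
# accuracy over equal-width confidence bins, weighted by bin size.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```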
FATE issues: LLMs are known to pick up the biases present in the datasets that they are trained on. In the case of multilingual LLMs, apart from bias and fairness issues at the group and individual level, one also needs to address the issue of disparity of zero-shot transfer accuracies across languages and varieties
(Choudhury and Deshpande, 2021; Lauscher et al.,
2020). Furthermore, there is little work done on the interaction among the biases in corpora from different languages, influence of grammatical gender (Cao and Daumé, 2021) and other syntactic and semantic factors on measurement and mitigation of biases, and socio-cultural aspects of biases (Sambasivan et al., 2021). In this section of the tutorial, we will survey the work done so far in non-English FATE issues and present challenges that remain to be addressed.
Deploying to language communities: LLMs today are trained using billions of parameters, making them infeasible to be used in low-memory footprint devices. Language communities (particularly those that speak under-resourced languages) that may benefit the most from Speech and NLP technologies may not have good enough connectivity to be able to use models hosted on the cloud.
This necessitates the development or distillation of lightweight models for low-resource languages, and in this section, we will present research in this direction (Diddee et al., 2022). We will study the state of current LT to serve communities speaking different languages for critical situations such as healthcare bots (Mondal et al., 2022). Further, there are many social and cultural factors to be taken into account while deploying MMLMs to language communities, which we will also discuss in this section.
## 1.2 Diversity Considerations
The topic of the tutorial inherently encourages linguistic diversity. In terms of gender diversity, two of the tutorial presenters are female, while four are male. In this tutorial, we will cover issues related to Responsible AI (fairness, toxicity) and deploying to under-resourced language communities which will improve diversity considerations while building LLMs. The instructors are a mix of senior, mid-career and junior researchers.
## 1.3 Reading List
Please check the references section for the reading list.
## 2 Instructor Bios
Sunayana Sitaram is a Senior Researcher at Microsoft Research India, where she works on multilingual speech and NLP. Her current research interests include training and evaluation of Massively Multilingual Language Models and Responsible AI for NLP. Prior to coming to MSRI as a Post Doc, Sunayana completed her MS and PhD
at the Language Technologies Institute, Carnegie Mellon University in 2015. Sunayana's research has been published in top NLP and Speech conferences including ACL, NAACL, EMNLP, Interspeech, ICASSP. She has organized special sessions and workshops on under-resourced languages, code-switching, multilingual evaluation and speech for social good. She has also led the creation of several benchmarks and datasets in code-switching, ASR, NLI and TTS that have been used by research groups all over the world.
Monojit Choudhury is a Principal Applied Scientist at Microsoft Turing, prior to which he was a Principal Researcher at Microsoft Research India. He is also a Professor of Practice at Plaksha University, and had held adjunct faculty positions at Ashoka University, IIIT Hyderabad and IIT
Kharagpur. Over the past 15 years, Monojit has worked on several impactful projects on processing of code-mixed text, evaluation and linguistic fairness of large language models, and social impact through participatory design of technology for under-resourced languages like Gondi, Mundari, Idu Mishmi and Swahili. Monojit has served as Senior Area Chair and Area chair in leading NLP and AI conferences including EMNLP, ACL, NAACL,
IJCNLP and AAAI. He has organized several successful workshops in *ACL conferences (SUMEval 2022, CALCS series, TextGraph series, etc.) and has delivered a tutorial on Code-mixed text processing at EMNLP 2019. He is the general chair of the Panini Linguistics Olympiad and the founding co-chair of Asia Pacific Linguistics Olympiad
– programs to introduce bright young students to linguistics and computational linguistics through puzzles. Dr. Choudhury holds PhD and B.Tech degrees in Computer Science and Engineering from IIT Kharagpur.
Vishrav Chaudhary is a Principal Researcher at Microsoft Turing where he works on scaling and building efficient Multilingual and Multimodal representation and generation models. Prior to Microsoft, Vishrav was a Lead Researcher at FAIR
and focused on several aspects of Machine Translation, Quality Estimation and Cross-lingual understanding. Over the past 10 years, Vishrav's research work has been published in several leading NLP and AI conferences and journals including ACL, EMNLP, NAACL, EACL, AACL, TACL,
JMLR and AMTA. He has also organized several workshops successfully including SUMEval 2022, AmericasNLP 2021, WMT 2021 etc. He has also served as an Area Chair for EMNLP 2022. Vishrav has also led creation of benchmarks and datasets targeting 100+ languages which have been used to train state-of-the-art Cross Lingual Representation and Machine Translation models.
Barun Patra is an Applied Scientist at Microsoft Turing. His research interest revolves around building better foundational models that can help support numerous NLP tasks across different languages. Barun's research work focuses on improving the quality and efficiency of training these large multilingual foundational models, helping achieve state-of-the-art performance on crosslingual NLP tasks.
Kabir Ahuja is a Research Fellow at Microsoft Research India, where he works on building linguistically fair multilingual models covering different aspects around their performance, calibration, evaluation, interpretation, and data collection. He is also interested in the analysis and interpretability of the computation mechanisms utilized by neural sequence models for solving different tasks.
Kalika Bali is a Principal Researcher at Microsoft Research India working in the areas of Machine Learning, Natural Language Systems and Applications, as well as Technology for Emerging Markets. Her research interests lie broadly in the area of Speech and Language Technology, especially in the use of linguistic models for building technology that offers more natural Human-Computer as well as Computer-Mediated interactions.
## 3 Other
Estimate of audience size: 50
Venues: We would prefer ACL 2023 to be the venue for the tutorial, but EMNLP and EACL are also acceptable. We do not foresee any special requirements for technical equipment.
## 3.1 Ethics Statement
This tutorial will present current research on Multilingual model training, evaluation, Responsible AI issues and deploying models in the field. Although we aim to promote linguistic diversity by discussing issues pertaining to multilingual models trained on around 100 languages, many languages of the world are not supported by these models.
Further, the techniques that we will discuss mainly apply to written languages, while unwritten languages will be excluded from the tutorial.
## References
Kabir Ahuja, Monojit Choudhury, and Sandipan Dandapat. 2022a. On the economics of multilingual few-shot learning: Modeling the cost-performance trade-offs of machine translated and manual data. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1369–1384, Seattle, United States. Association for Computational Linguistics.
Kabir Ahuja, Sandipan Dandapat, Sunayana Sitaram, and Monojit Choudhury. 2022b. Beyond static models and test sets: Benchmarking the potential of pretrained models across tasks and languages. In *Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP*, pages 64–74, Dublin, Ireland. Association for Computational Linguistics.
Kabir Ahuja, Shanu Kumar, Sandipan Dandapat, and Monojit Choudhury. 2022c. Multi task learning for zero shot performance prediction of multilingual models. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 5454–5467, Dublin, Ireland. Association for Computational Linguistics.
Kabir Ahuja, Sunayana Sitaram, Sandipan Dandapat, and Monojit Choudhury. 2022d. On the calibration of massively multilingual language models.
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *Transactions* of the Association for Computational Linguistics, 7:597–610.
Dara Bahri, Hossein Mobahi, and Yi Tay. 2022.
Sharpness-aware minimization improves language model generalization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7360–
7371, Dublin, Ireland. Association for Computational Linguistics.
Yang Trista Cao and Hal Daumé III. 2021. Toward Gender-Inclusive Coreference Resolution: An Analysis of Gender and Bias Throughout the Machine Learning Lifecycle. *Computational Linguistics*,
47(3):615–661.
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies, pages 3576–3588, Online. Association for Computational Linguistics.
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Bo Zheng, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. 2022.
XLM-E: Cross-lingual language model pre-training via ELECTRA. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6170–6182, Dublin, Ireland. Association for Computational Linguistics.
Monojit Choudhury and Amit Deshpande. 2021. How linguistically fair are multilingual pre-trained language models? *Proceedings of the AAAI Conference* on Artificial Intelligence, 35(14):12710–12718.
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar. 2022. Detecting languages unintelligible to multilingual models through local structure probes.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Harshita Diddee, Sandipan Dandapat, Monojit Choudhury, Tanuja Ganu, and Kalika Bali. 2022. Too brittle to touch: Comparing the stability of quantization and distillation towards developing lightweight low-resource MT models.
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021.
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *arXiv preprint arXiv:2111.09543*.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B.
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. *CoRR*,
abs/2001.08361.
Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg
Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gérard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Romero Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Vu Minh Chien, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Luccioni, and Yacine Jernite.
2022. The bigscience ROOTS corpus: A 1.6TB
composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´
Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online. Association for Computational Linguistics.
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019.
Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, and Furu Wei. 2021. Deltalm: Encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders. *arXiv preprint* arXiv:2106.13736.
Ishani Mondal, Kabir Ahuja, Mohit Jain, Jacki O'Neill, Kalika Bali, and Monojit Choudhury. 2022. Global readiness of language technology for healthcare:
What would it take to combat the next pandemic?
In *Proceedings of the 29th International Conference* on Computational Linguistics, pages 4320–4335, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Max Müller-Eberstein, Rob van der Goot, and Barbara Plank. 2022. Sort by structure: Language model ranking as dependency probing. In *Proceedings of*
the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1296–1307, Seattle, United States. Association for Computational Linguistics.
Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. 2020. A monolingual approach to contextualized word embeddings for mid-resource languages.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1703–
1714, Online. Association for Computational Linguistics.
Barun Patra, Saksham Singhal, Shaohan Huang, Zewen Chi, Li Dong, Furu Wei, Vishrav Chaudhary, and Xia Song. 2022. Beyond english-centric bitexts for better multilingual language representation learning. *arXiv* preprint arXiv:2210.14867.
Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022.
Lifting the curse of multilinguality by pre-training modular transformers. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics.
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021.
Re-imagining algorithmic fairness in india and beyond. In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*,
FAccT '21, page 315–328, New York, NY, USA. Association for Computing Machinery.
Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2022. NoisyTune: A little noise can help you finetune pretrained language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 680–685, Dublin, Ireland. Association for Computational Linguistics.
Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Predicting performance for natural language processing tasks.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8625–
8646, Online. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Consistency regularization for cross-lingual fine-tuning. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3403–3417, Online.
Association for Computational Linguistics. |
amini-etal-2023-generating | Generating Text from Language Models | https://aclanthology.org/2023.acl-tutorials.4 | An increasingly large percentage of natural language processing (NLP) tasks center around the generation of text from probabilistic language models. Despite this trend, techniques for improving or specifying preferences in these generated texts rely mostly on intuition-based heuristics. Further, there lacks a unified presentation of their motivations, practical implementation, successes and pitfalls. Practitioners must, therefore, choose somewhat blindly between generation algorithms{---}like top-p sampling or beam search{---}which can lead to wildly different results. At the same time, language generation research continues to criticize and improve the standard toolboxes, further adding entropy to the state of the field. In this tutorial, we will provide a centralized and cohesive discussion of critical considerations when choosing how to generate from a language model. We will cover a wide range of empirically-observed problems (like degradation, hallucination, repetition) and their corresponding proposed algorithmic solutions from recent research (like top-p sampling and its successors). We will then discuss a subset of these algorithms under a unified light; most stochastic generation strategies can be framed as locally adapting the probabilities of a model to avoid failure cases. Finally, we will then cover methods in controlled generation, that go beyond just ensuring coherence to ensure text exhibits specific desired properties. We aim for NLP practitioners and researchers to leave our tutorial with a unified framework which they can use to evaluate and contribute to the latest research in language generation. | # Generating Text From Language Models
Afra Amini (ETH Zürich), Ryan Cotterell (ETH Zürich), John Hewitt (Stanford University), Luca Malagutti (ETH Zürich), Clara Meister (ETH Zürich), Tiago Pimentel (University of Cambridge)
## Abstract
An increasingly large percentage of natural language processing (NLP) tasks center around the generation of text from probabilistic language models. Despite this trend, there lacks a unified framing of the techniques for generating from language models, both in terms of methods that improve text quality and methods that allow more fine-grained control of generation.
Without this framing, practitioners must either be experts in the generation field or choose somewhat blindly between a large range of algorithms that can lead to wildly different results depending on the specific use-case, e.g., top-p sampling and beam search. In this tutorial, we will provide a centralized and cohesive discussion of critical considerations when choosing how to generate from a language model. We will first discuss the formal definition of a probabilistic language generator and taxonomize a wide range of empirically-observed problems with systems using these models, like degradation, hallucination, and repetition. We will then discuss their corresponding proposed algorithmic solutions under a unified light; specifically as *locally adapting* the probabilities of a model to avoid failure cases. Finally, we will then cover methods in *controlled* generation, that go beyond just ensuring coherence to ensure text exhibits specific desired properties. We aim for NLP practitioners and researchers to leave our tutorial with a unified framework which they can use to evaluate and contribute to the latest research in language generation.
## 1 Introduction And Motivation
With their widespread public availability, large pretrained language models have become a core part of many natural language processing (NLP) pipelines.
This trend is particularly evident in language generation tasks, where prompt engineering and controlled generation techniques have shown that these models can essentially be used "out-of-the-box" for various language generation needs. Yet, as has been observed repeatedly, how one chooses to generate text from these models can lead to vastly different results; make the wrong choice and a language model can fall into repetitive loops (Welleck et al., 2020), generate gibberish (Holtzman et al., 2020),
or spew out random (and possibly falsifiable) declarations (Maynez et al., 2020). In the effort to circumnavigate these issues, one can make use of a variety of relatively straightforward methods: (i) sampling adapters, simple modifications to token-level distributions that help prevent the generation of incoherent text; (ii) controlled generation methods, techniques that guide these models to output strings with a set of desired attributes. While employing these methods often does not require domain expertise, many people do not have proper knowledge of the tools available—and much less how and when to apply them. Hence, without years of experience in this subfield, both NLP researchers and practitioners may have difficulty using pretrained language models for text generation, as they will likely encounter the problematic behaviors mentioned above.
In this **cutting-edge** tutorial, we aim to offer a comprehensive introduction to techniques for generating strings from language models, discussing both how to sample adeptly from and explicitly control them. This tutorial will be divided in four parts. First, we will present background knowledge on language modeling, discussing both its mathematical formulation, the empirically-observed successes and shortcomings of modern models when used to generate language, and the difficulty in evaluating these successes and failures. Second, we will give a brief overview of the basics of language generation, framing generation as the combination of a choice of a decoding algorithm and objective.
The final two parts of this tutorial, which focus on alleviating the previously discussed issues with using language models out-of-the-box for generation, will be discussed within this framing: we present heuristic modifications to the objective that have empirically-proven themselves effective at improving generation quality as well as new decoding algorithms that—when combined with the right objective—can be used to enforce constraints on the text output by models. We believe this will equip the NLP community with the knowledge of how to better employ these models for their downstream use-cases, thus making them more broadly accessible.
## 2 Target Audience
Our tutorial is targeted at members of the NLP
community who wish to make use of language models for various language generation tasks. This includes researchers, interested in e.g., data augmentation techniques, as well as practitioners wishing to make use of pretrained language models in their language generation pipelines. We expect that participants are comfortable with probabilistic formulations of NLP tasks, as well as the structure and formulation of standard autoregressive models e.g.,
transformers. **While we do not require any readings, we recommend reviewing (in no particular order) the works cited in this proposal.**
## 3 Outline
## 3.1 Part 1: Background
Modern natural language processing tends to proceed by (1) framing a task in probabilistic terms, (2)
estimating a model to imitate the task's generative processes (typically using finite training datasets as a proxy), and then (3) using this model as a tool to accomplish the task. This is how the task of language modeling is often approached. More precisely, practitioners take a corpus D = {y^(n)}_{n=1}^{N}—an N-sized set of strings consisting of tokens y from some vocabulary V—and treat it as a set of independently and identically distributed samples from a distribution p(y). We will use p to denote the *true* language modeling distribution, i.e., the distribution defined by the data-generating process, from which we drew our samples. In practice, the vast majority of these models, which we denote as pθ, are trained to minimize the cross-entropy with the empirical distribution defined by our finite set of samples D. In this tutorial, we'll focus largely on autoregressive models of p, meaning that we decompose the probability of a string as p(y) = ∏_{t=1}^{T} p(yt | y<t) and build a model of the conditional distribution p(yt | y<t) instead.
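As a minimal illustration of this chain-rule factorization, the log-probability of a string can be scored by summing the token-level conditionals under a pretrained model; the sketch below assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint, both chosen purely for convenience.

```python
# Minimal sketch: scoring a string under an autoregressive LM by summing
# log p(y_t | y_<t) over positions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tokenizer("The cat sat on the mat.", return_tensors="pt").input_ids  # (1, T)
with torch.no_grad():
    logits = model(ids).logits  # (1, T, |V|); position t-1 predicts token t

log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
token_lp = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
# The first token is left unscored here, since GPT-2 is given no BOS context.
print("log p(y) ≈", token_lp.sum().item())
```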
Successes and known failures. It is hard to overstate the improvements in modeling performance that have occurred in the last five years, as measured simply in terms of perplexity on held-out data. These models are used ubiquitously as the base for fine-tuning on downstream tasks, leading to SOTA performance for myriad tasks. Indeed, the recent ChatGPT is one such instance of a large language model fine-tuned to generate astoundingly fluent and realistic text.
However, when used out-of-the-box for language generation (i.e., without any fine-tuning),
these models exhibit a number of failure modes.
Among others:
- **Low-quality, low-probability words.** Due to the use of the cross-entropy objective, language models place non-zero probability on poor continuations.
- **Degradation of long texts.** Possibly as a result of the above, generating longer texts can present a greater challenge, as errors tend to propagate and accumulate.
- **Repetition when searching for the mode.** In cases where *highly probable* text under the training set is desired, language models' probability estimates tend to fail and overestimate the probability of highly repetitive text.
- **Inability to guide or constrain generation.**
There is no built-in way to direct or shift generation towards a particular concept, meaning one may have to sample indefinitely in order to get a text with the desired attributes.
Further, there is the added difficulty of measuring the quality of generations in many settings: automatic metrics such as BLEU or ROUGE require references and reference-free metrics still do not have direct mechanisms for measuring attributes of text that may be of interest, e.g., faithfulness to a topic. A range of language generation techniques are used both to avoid known failure modes, to coax more desirable properties out of language models, and to direct generation. These methods will be the focus of our tutorial.
## 3.2 Part 2: Language Generation
Given a language model pθ(·|y<t), how does one generate text from it? In this part of the tutorial, we give an overview of **decoding strategies**: techniques for generating from probability distributions over strings. Specifically, we will frame all decoding strategies as consisting of two choice points: a scoring function (or objective) and a decoding algorithm. For example, standard ancestral sampling can be recovered when standard log-probability is used as the scoring function and multinomial sampling is used as the algorithm. While this framing may seem excessive at first, it emphasizes the ability to combine the components of well-known decoding strategies. This in turn allows us to build decoding strategies—whose efficacy depends on the underlying model and the desired outcome—with specific goals in mind. For example, one could use the truncation scoring function specified by typical sampling in combination with the beam search algorithm if the user has reason to believe this would help them achieve their goals.
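A minimal sketch of this two-choice-point view is given below: a generic decoding loop parameterized by an `adapter` (the scoring-function side) and a `choose` rule (the decoding-algorithm side). The model is assumed to be any Hugging Face-style causal LM whose forward pass returns `.logits`; the function names are ours, not a fixed API.

```python
# Minimal sketch of the (scoring function, decoding algorithm) decomposition.
import torch

def decode(model, input_ids, adapter, choose, max_new_tokens=40):
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(input_ids).logits[:, -1, :]        # next-token scores
        scores = adapter(torch.log_softmax(logits, dim=-1))   # scoring function
        next_token = choose(scores)                           # decoding algorithm
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids

identity = lambda scores: scores
greedy = lambda scores: scores.argmax(dim=-1, keepdim=True)
ancestral = lambda scores: torch.multinomial(scores.softmax(dim=-1), num_samples=1)
# decode(model, ids, identity, ancestral) recovers ancestral sampling,
# while decode(model, ids, identity, greedy) recovers greedy search.
```

Greedy search, ancestral sampling, and truncated sampling then differ only in which pair of functions is passed in.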
We will then motivate the usage of different scoring functions and decoding algorithms, providing both intuition and formal reasons as to why we might want either in different settings. A main focus of this discussion will be the scale of "openendedness" on which a generation task falls. For example, story generation can be very **open-ended**
when there are no specific desired directions for the story to follow. On the other hand, machine translation is quite **semantically-constrained**. In this tutorial, we will discuss open-endedness as a scale well-described by the **entropy** of the true distribution a task specifies, an attribute which—without explicitly added modeling biases—we expect to be reflected in models of this distribution.
These attributes of a generation task motivate different quantitative approaches during decoding.
In machine translation, we often look for high probability strings, for which we can rely on deterministic decoding algorithms that "search" over the support of the distribution pθ(· | y<t) for this correct answer. On the other hand, if generating from a distribution over web text documents, the notion of the "most likely" web text document is unintuitive, to say the least. This motivates the use of stochastic generation strategies, which naturally add diversity to the generated output. Yet in both of these cases, several issues arise from simply using p(y) as the scoring function, such as the inability to steer generation in a desired direction (if not encoded directly in p(y) itself) or the possibility to sample from low probability regions of p(y). In the next section, we dive into different methods to mitigate these issues.
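Before turning to those methods, the deterministic-versus-stochastic contrast above can be made concrete with the Hugging Face `generate` API; GPT-2 is used here only as a convenient public checkpoint and the hyperparameter values are arbitrary.

```python
# Illustration of the deterministic-vs-stochastic choice of decoding algorithm.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok("The meaning of life is", return_tensors="pt").input_ids

beam = model.generate(ids, num_beams=5, max_new_tokens=30)                    # search
nucleus = model.generate(ids, do_sample=True, top_p=0.95, max_new_tokens=30)  # sampling
print(tok.decode(beam[0]))
print(tok.decode(nucleus[0]))
```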
## 3.3 Part 3: Sampling Adapters
In this part of the tutorial, we will discuss simple modifications to the standard log-probability scoring function that have been proposed in the effort to mitigate the generation failures discussed in part 1 (Fan et al., 2018; Holtzman et al., 2020; Basu et al., 2021; Meister et al., 2022; Hewitt et al.,
2022). For example, one issue that has received large focus is the constraint that language models must assign nonzero probability to all tokens in the vocabulary. Even if a model assigns inappropriate tokens very low probability, there is still the chance of sampling them when using stochastic decoding algorithms. This can lead to undesirable outputs, as a single incoherent token can render a natural language string virtually incomprehensible
(Fan et al., 2018; Holtzman et al., 2020). Under the assumption that our training data consisted of coherent text, the model will subsequently not be able to predict appropriate continuations for such a text as it was not exposed to text of this nature during training. While intuitively we might expect this issue to only occur with low probability, a concrete example proves otherwise.1 Methods such as nucleus and top-k sampling have proposed simple modifications to the scoring function p(· | y<t) to exclude undesirable tokens from the candidate pool. These types of transformations are widely-employed when sampling from probabilistic language generators: they are quick to implement, efficient in practice, and surprisingly effective. Indeed, nucleus sampling is often used as a baseline in various language generation tasks
(Welleck et al., 2020; Pillutla et al., 2021; Basu et al., 2021).
Here we will offer a formal treatment of these transformations; we present a general framework for what we call **sampling adapters**, the class of functions g : R|V| → R|V| that adapts each conditional distribution pθ(· | y<t) in a locally normalized language model to a new distribution. We will show results from prior works comparing these methods, describing the problems that they mitigate (such as sampling incoherent tokens) as well as the problems that they introduce (such as repetitive generations). Finally, we will discuss possible interpretations of the effectiveness of these methods, in order to provide intuition for why they lead to better language generation.
## 3.3 Part 3: Sampling Adapters
In this part of the tutorial, we will discuss simple modifications to the standard log-probability scor-
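The following sketch (a toy illustration with arbitrary probabilities and thresholds, not code from the cited papers) implements top-k and nucleus truncation as such adapters, i.e., functions that map one next-token distribution to another:

```python
import numpy as np

def top_k_adapter(probs, k=3):
    """Keep only the k most probable tokens and renormalize."""
    out = np.zeros_like(probs)
    keep = np.argsort(probs)[-k:]
    out[keep] = probs[keep]
    return out / out.sum()

def nucleus_adapter(probs, p=0.9):
    """Keep the smallest set of top tokens whose cumulative mass reaches p."""
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    out = np.zeros_like(probs)
    out[order[:cutoff]] = probs[order[:cutoff]]
    return out / out.sum()

probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
print(top_k_adapter(probs, k=3))      # mass only on the 3 most likely tokens
print(nucleus_adapter(probs, p=0.8))  # smallest prefix covering at least 80% of the mass
```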
## 3.4 Part 4: Controlled Generation
Generated samples from language models often contain toxic or non-factual content (Gehman et al.,
2020; Maynez et al., 2020). Further, they also often go off-topic, even after applying the sampling adapters discussed in the previous section (Yang and Klein, 2021). To ensure that the generated samples satisfy a set of desired properties—e.g., being non-toxic or talking about a certain topic—we need methods to impose controls during the sampling process. The question we will discuss in this part of the tutorial is: how can we sample from a pretrained language model pθ, while ensuring that samples satisfy a specific control c? This can be formalized as turning our scoring function into a different distribution pθ(y | c). We look at methods for building pθ(y | c) using an arbitrary language model, and at the decoding algorithms that can be used with this distribution under different circumstances.
Given a control c, our goal is to sample a token yt from the distribution p(· | y<t, c). Following Bayes' rule, this distribution is proportional to pθ(· | y<t) p(c | y≤t), where we use pθ to denote an arbitrary language model. In other words, we can view our problem as reweighting the score of a candidate yt under the language model pθ according to the probability that y≤t satisfies the control target: p(c | y≤t) (Yang and Klein, 2021).
This control target can be estimated with a supervised classifier parameterized by ϕ: pϕ(c | y≤t)
(Ghazvininejad et al., 2017; Holtzman et al., 2018).
Building such a classifier, however, is arguably an easier problem than building the entire distribution over natural language strings, if only due to the smaller size of the support. Once we obtain such estimates, we can make use of an arbitrary language model pθ and a standard autoregressive decoding algorithm for controlled generation.
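A minimal sketch of this reweighting for one decoding step is shown below; `toy_topic_classifier` is a hypothetical stand-in for a trained classifier pϕ(c | y≤t), and the vocabulary and probabilities are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def controlled_step(lm_probs, prefix, vocab, classifier_prob):
    """Sample y_t from a distribution proportional to p_LM(y_t | y_<t) * p(c | y_<=t)."""
    scores = np.array([lm_probs[i] * classifier_prob(prefix + [tok])
                       for i, tok in enumerate(vocab)])
    scores /= scores.sum()
    return vocab[rng.choice(len(vocab), p=scores)]

vocab = ["goal", "election", "match", "the"]
lm_probs = np.array([0.2, 0.3, 0.1, 0.4])  # toy p_LM(. | y_<t)

def toy_topic_classifier(sequence):
    """Hypothetical stand-in for a trained classifier p(c = sports | y_<=t)."""
    return 0.9 if {"goal", "match"} & set(sequence) else 0.1

print(controlled_step(lm_probs, ["yesterday", "the"], vocab, toy_topic_classifier))
```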
While autoregressive methods have proven effective for controlling the topic or the sentiment of samples, they fail for more complex controls such as toxicity or syntax. Particularly, for more complex controls, estimating p(c | y≤t) becomes challenging. If at any point this probability distribution diverges from the true value, the error will propagate to the next steps due to the autoregressive structure of most of these models. To address this issue, other controlled generation methods propose sampling the whole sequence y at once, using Markov-chain methods. Specifically, these methods propose decoding algorithms that build Markov chains whose stationary distribution is p(y | c).
Given that the sampling space is high dimensional, Hamiltonian Monte Carlo (HMC) algorithms, such as Langevin dynamics, have been shown to be effective for drawing samples from those Markov chains (Qin et al., 2022; Kumar et al., 2022).
## 4 Presenters
- **Afra Amini** is a PhD student at ETH Zürich in the ETH AI Center. Her current foci include language generation and parsing.
- **Ryan Cotterell** is an assistant professor at ETH Zürich in the Institute for Machine Learning. His research focuses on a wide range of topics, including informationtheoretic linguistics, parsing, computational typology and morphology, and bias and fairness in NLP systems.
- **John Hewitt** is a PhD student at Stanford University. His research tackles basic problems in learning models from broad distributions over language, characterizing and understanding those models, and building smaller, simpler models.
- **Clara Meister** is a PhD student at ETH
Zürich in the Institute for Machine Learning and a Google PhD Fellow. Her current foci include language generation, psycholinguistics, and the general application of statistical methods to natural language processing.
- **Tiago Pimentel** is a PhD student at the University of Cambridge and a Facebook Fellow. His research focuses on information theory, and its applications to the analysis of pre-trained language models and natural languages.
## Diversity Considerations
As our tutorial focuses on language generation, we will cover issues related to modeling and generating strings in languages which are typologically different from English. Further, this tutorial was developed by a group of researchers from three universities (Stanford, ETHZ and Cambridge), who are originally from 3 continents (Asia, North America, and South America). Lastly, it will discuss work produced by authors spanning many backgrounds, both in industry—where institutions have the resources to train these large language models and make them publicly available—and academia—which has given a large focus to making efficient use of pretrained models during generation.
## References
Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, and Lav R. Varshney.
2021. Mirostat: A perplexity-controlled neural text decoding algorithm. In *Proceedings of the 9th International Conference on Learning Representations*.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018.
Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 889–898, Melbourne, Australia. Association for Computational Linguistics.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association for* Computational Linguistics, pages 3356–3369, Online. Association for Computational Linguistics.
Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In *Proceedings of ACL 2017,*
System Demonstrations, pages 43–48, Vancouver, Canada. Association for Computational Linguistics.
John Hewitt, Christopher D. Manning, and Percy Liang.
2022. Truncation sampling as language model desmoothing. In *Findings of the Conference on Empirical Methods in Natural Language Processing*.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *Proceedings of the 8th International* Conference on Learning Representations.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638–1649, Melbourne, Australia. Association for Computational Linguistics.
Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov. 2022.
Constrained sampling from language models via langevin dynamics in embedding spaces. *CoRR*,
abs/2205.12558.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Locally typical sampling. *Transactions of the Association for Computational Linguistics*.
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. In *Advances in Neural Information Processing Systems*, volume 34, pages 4816–4828. Curran Associates, Inc.
Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. COLD decoding: Energy-based constrained text generation with langevin dynamics. In Advances in Neural Information Processing Systems.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In Proceedings of the 8th International Conference on Learning Representations.
Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511–3535, Online. Association for Computational Linguistics. |
yin-etal-2023-indirectly | Indirectly Supervised Natural Language Processing | https://aclanthology.org/2023.acl-tutorials.5 | This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a {``}related{''} task T′? (ii) humans do not use exhaustive supervision; they rely on occasional feedback, and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP? To the end, we will discuss several lines of research that address those challenges, including (i) indirect supervision from T ′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations{---}all having statistical associations with the task, (iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation. | # Indirectly Supervised Natural Language Processing
Wenpeng Yin†, Muhao Chen‡, Ben Zhou⋄, Qiang Ning⋆, Kai-Wei Chang♯**, Dan Roth**⋄⋆
†Penn State; ‡USC; ⋄UPenn; ⋆AWS AI Labs; ♯UCLA
[email protected]; [email protected]
{xyzhou,danroth}@seas.upenn.edu [email protected]; [email protected]
## Abstract
This tutorial targets researchers and practitioners who are interested in ML technologies for NLP from indirect supervision. In particular, we will present a diverse thread of indirect supervision studies that try to answer the following questions: (i) when and how can we provide supervision for a target task T, if all we have is data that corresponds to a "related" task T′?
(ii) humans do not use *exhaustive* supervision; they rely on occasional feedback, and learn from incidental signals from various sources; how can we effectively incorporate such supervision in machine learning? (iii) how can we leverage multi-modal supervision to help NLP?
To this end, we will discuss several lines of research that address those challenges, including
(i) indirect supervision from T′ that handles T with outputs spanning from a moderate size to an open space, (ii) the use of sparsely occurring and incidental signals, such as partial labels, noisy labels, knowledge-based constraints, and cross-domain or cross-task annotations—all having statistical associations with the task,
(iii) principled ways to measure and understand why these incidental signals can contribute to our target tasks, and (iv) indirect supervision from vision-language signals. We will conclude the tutorial by outlining directions for further investigation.
## 1 Introduction
Conventional approaches to NLP rely on a large volume of task-specific labeled examples. This does not apply to scenarios where tasks may be too complicated or costly to annotate, or the system is required to handle a new task immediately.
Many people increasingly perceive that pretrained language models (PLMs) use self-supervision, and therefore there is no need for supervision anymore.
While this is probably true for Encoder-only models (e.g., BERT (Devlin et al., 2019)), this does not hold for Decoder models, where people nowadays use vast amounts of supervision and reinforcement learning signals. Therefore, it is still desirable to gather *supervision that already exists in related tasks or is cheap to obtain*, which is termed "indirect supervision" in this tutorial.
Recently, a growing number of works have studied indirect supervision for a wide range of NLP
tasks. For example, Yin et al. (2019) and Lu et al.
(2022a) respectively leveraged the rich annotation of a source task (natural language inference or summarization) to address poorly-annotated target tasks. To make better use of natural texts, some literature (Roth, 2017; Chen et al., 2021; He et al., 2021) proposed to explore incidental supervision, e.g., phonetic similarity and similar temporal distributions for named entity transliteration, to help downstream tasks. That sort of incidental supervision often consists of weak signals that exist in the data and the environment independently of the tasks at hand, and is hard for PLMs to encode. Furthermore, when accessing supervision from pure text is challenging, researchers have turned to other modalities for indirect supervision (Li et al., 2022b).
This tutorial presents a comprehensive introduction to those lines of frontier research on indirectly supervised NLP. In particular, it tries to answer the following questions: (i) Which source tasks can be more easily adapted to solve various target tasks, and what constraints apply? (ii) What are the limitations of pretrained language models in discovering supervision from natural texts, and how can we alleviate them with incidental signals? (iii) Are there any theoretical measures that can indicate the benefits of the incidental signals to a given downstream task? (iv) How can we mitigate the gap between different modalities if we want to utilize image/video knowledge to guide NLP? By addressing those critical questions, we believe it is necessary to present a timely tutorial to comprehensively summarize the new frontiers in indirectly supervised NLP research and point out the emerging challenges that deserve further investigation. Participants will learn about recent trends and emerging challenges in this topic, representative tools and learning resources to obtain ready-to-use models, and how related technologies benefit end-user NLP applications.
## 2 Outline Of Tutorial Content
This **half-day** tutorial presents a systematic overview of recent advancements in indirect supervision methods for NLP. The detailed contents are outlined below.
## 2.1 Background And Motivation [15Min]
We will begin motivating this topic with a selection of real-world applications and emerging challenges of NLP with limited end-task annotations.
## 2.2 Indirect Supervision From Nlu Tasks [30Min]
We start with indirect supervision from a source task that can efficiently handle a moderate number of outputs in the target task. For example, in most zero/few-shot text classification tasks, such as topic classification, entity typing, relation identification, etc., the main obstacle is letting systems understand the semantics of labels. In contrast to conventional supervised classifiers, which convert labels into indices, we introduce NLI (natural language inference)-based approaches that take into account the input semantics as well as label semantics. Specifically, we will introduce typical work that treats different topics (Yin et al., 2019), stances (Xu et al., 2022), entity types (Li et al., 2022a; Du et al., 2023), event types (Lyu et al., 2021), entity relations (Xia et al., 2021; Sainz et al., 2021, 2022), and question-answer pairs (Yin et al., 2021) as hypotheses and the inputs as premises, then makes use of a pretrained NLI system to handle a variety of classification tasks with a given set of labels.
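As a minimal sketch of this recipe (the checkpoint name, input, labels, and hypothesis template below are illustrative choices, not prescriptions from the cited works), an off-the-shelf NLI model can be used through the Hugging Face zero-shot classification pipeline:

```python
from transformers import pipeline

# An NLI model fine-tuned on MNLI acts as the source of indirect supervision.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The team scored twice in the final ten minutes to win the cup."
labels = ["sports", "politics", "technology"]

# Each label is verbalized into a hypothesis; the input text serves as the premise.
result = classifier(text, candidate_labels=labels,
                    hypothesis_template="This text is about {}.")
print(result["labels"][0], round(result["scores"][0], 3))
```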
In addition, we will present extractive question answering (Ex-QA) based supervision that is utilized for downstream tasks (McCann et al., 2018; Keskar et al., 2019; He et al., 2020; Wu et al., 2020; Li et al., 2020). The advantage of Ex-QA based indirect supervision over the NLI-based one is that Ex-QA can handle sequence tagging and span detection tasks, while NLI-based approaches primarily work for classification.
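Analogously, a span-level decision can be recast as a question answered by an off-the-shelf reading-comprehension model; in the sketch below, the checkpoint, question, and context are chosen only for illustration:

```python
from transformers import pipeline

# An extractive QA model repurposed as a span-level "annotator".
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Marie Curie was born in Warsaw and later moved to Paris to study physics."
# A slot-filling / tagging decision expressed as a natural-language question.
answer = qa(question="Where was Marie Curie born?", context=context)
print(answer["answer"], round(answer["score"], 3))
```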
## 2.3 Indirect Supervision From Nlg And Ir [30Min]
We will introduce methodologies that acquire indirect supervision signals from natural language generation (NLG) and information retrieval tasks to solve more low-resource discriminative tasks. Formulating discriminative tasks as generation tasks can be an efficient way to guide PLMs to leverage the semantics of decision labels (Huang et al.,
2021; Lu et al., 2022a; Hsu et al., 2022; Yuan et al.,
2022). A method of this kind typically leads to a sequence-to-sequence generation process that emits a verbalization of the decision label given the input sequence (Zeng et al., 2018, 2020; Ye et al., 2021; Cao and Ananiadou, 2021). Instead of predicting classification logits, these models represent the class as a concise structure and employ controlled decoding for the generation. In this way, the model allows cross-task signal transfer from high-resource NLG tasks, and captures a semantically rich representation of the discriminative task's original decision space. A representative example is SuRE (Lu et al., 2022a), which reformulates the more expensive relation extraction task into summarization with constrained decoding, leading to more precise and label-efficient sentence-level relation extraction. We will also introduce methods that reformulate discriminative tasks as retrieval (Zhang et al.,
2021a,b; Huang et al., 2022; Chen et al., 2020).
This technique allows using the inductive bias of a dense retrieval model to handle a discriminative task with a large decision space, such as entity linking (Zhang et al., 2021a) and fine-grained typing
(Huang et al., 2022).
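To give a flavor of the generation-based reformulation, the hedged sketch below scores verbalized candidate labels with a pretrained sequence-to-sequence model; it illustrates the general idea rather than the exact pipeline of SuRE or the other cited methods, and the checkpoint, templates, and example are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")              # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

source = "summarize: Ada Lovelace worked closely with Charles Babbage on the Analytical Engine."
# Candidate relations verbalized as short summaries (illustrative templates).
candidates = {
    "collaborator": "Ada Lovelace was a collaborator of Charles Babbage.",
    "no_relation":  "Ada Lovelace has no relation to Charles Babbage.",
}

def log_likelihood(src, tgt):
    enc = tok(src, return_tensors="pt")
    labels = tok(tgt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss   # mean per-token NLL of the target
    return -loss.item() * labels.shape[1]         # total log-likelihood of the verbalization

scores = {rel: log_likelihood(source, verb) for rel, verb in candidates.items()}
print(max(scores, key=scores.get), scores)
```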
## 2.4 Incidental Supervision From Natural Text [30Min]
Both kinds of indirect supervision introduced in the above sections (§2.2-§2.3) rely on transferred supervision signals from some source task annotations. Natural texts are structured to contain a large number of incidental signals that can be subsequently utilized by downstream tasks with minimal human effort. Despite the fact that the community has found that PLMs are capable of providing incidental supervision signals for a wide range of tasks, they do not provide controls over what kinds of knowledge exist. To this end, we introduce incidental relations found in natural text spans. For example, certain keywords and linguistic patterns can provide incidental supervision to downstream tasks such as relation extraction (Zhou et al., 2022b), temporal reasoning (Zhou et al., 2020, 2021), and affordance reasoning (Qasemi et al., 2022). Moreover, textual snippets can often be viewed in a structure by their global information, such as publication dates, titles, and authors, which establish relations that help with complex tasks (Zhou et al., 2022a).
Designing and collecting such linguistic patterns often requires human knowledge; this process of injecting human knowledge provides signals that PLMs cannot find and produces diverse automatic supervision for many tasks.
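As a toy illustration of how cheap surface patterns can act as incidental signals (the cue list and sentences below are made up and not drawn from the cited works), consider:

```python
import re

# Surface connectives acting as incidental signals that a sentence expresses
# an explicit temporal relation between two events (patterns are illustrative).
TEMPORAL_CUES = [r"\bbefore\b", r"\bafter\b", r"\buntil\b", r"\bwhile\b"]

def has_incidental_temporal_signal(sentence):
    return any(re.search(p, sentence, flags=re.IGNORECASE) for p in TEMPORAL_CUES)

corpus = ["She finished the report before the meeting started.",
          "After the storm passed, the crew resumed work.",
          "The committee reviewed the draft."]

# Sentences containing a cue can be harvested as cheap, noisy training signal
# for a temporal reasoning model; the remaining sentences stay unlabeled.
print([(s, has_incidental_temporal_signal(s)) for s in corpus])
```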
## 2.5 Theoretical Analysis Of Incidental Supervision [30Min]
§2.4 presents several real-world applications of incidental signals. In this part, we pose the challenge of defining a principled way to measure the benefits of these signals to a given downstream task, and of further understanding why and how these signals can help reduce the complexity of the learning problem in theory. We will introduce existing efforts along these two lines, mainly He et al.
(2021) and Wang et al. (2020). Specifically, we introduce (i) a unified theoretical framework (Wang et al., 2020) for multi-class classification when the supervision is provided by a variable that contains nonzero mutual information with the gold label; the nature of this problem is determined by the transition probability from the gold labels to the indirect supervision variables (van Rooyen and Williamson, 2018) and the learner's prior knowledge about the transition; and (ii) a unified PAC-Bayesian motivated informativeness measure, PABI (He et al.,
2021), that characterizes the uncertainty reduction provided by incidental supervision signals. We share studies in Qasemi et al. (2022) and Ning et al.
(2019) that demonstrate PABI's effectiveness by quantifying the value added by various types of incidental signals to sequence tagging tasks. Finally, we will highlight the gaps that are yet to be closed in these lines, and point out future research directions on this topic.
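As a toy illustration of the quantity these analyses start from (a crude plug-in estimate of mutual information between an incidental signal and gold labels; this is not PABI itself, and the labels and signals below are made up):

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats from paired samples."""
    n = len(xs)
    joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

gold   = ["PER", "LOC", "PER", "ORG", "LOC", "PER", "ORG", "LOC"]
signal = ["cap", "cap", "cap", "cap", "cap", "cap", "low", "cap"]  # weakly related to gold
noise  = ["a",   "b",   "a",   "b",   "a",   "b",   "a",   "b"]    # nearly unrelated to gold

# On this toy sample, the more informative incidental signal yields higher
# estimated mutual information with the gold labels (about 0.20 vs. 0.04 nats).
print(mutual_information(gold, signal), mutual_information(gold, noise))
```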
## 2.6 Indirect Supervision From Multi-Modalities [30Min]
In the previous sections, we discussed how to leverage indirect supervision from text data. Next, we will extend our discussion to introduce methods that leverage indirect supervision in multimodal data for cross-modality tasks. We will take vision-language tasks, such as answering complex high-level questions about images (Zellers et al., 2019), as an example. We will first introduce methods that learn to align visual tokens and text tokens based on image caption data (Tan and Bansal, 2019; Li et al., 2019; Tan and Bansal, 2020). The cross-modality knowledge learned from indirect supervision can be used to solve various text, image, and mixed-modality tasks. We will then introduce approaches that use only indirect supervision from object recognition models to learn text-image alignment from unaligned language and vision data (Li et al., 2021). Finally, we will discuss methods for learning to ground elements of language to image regions without explicit supervision (Li et al.,
2022b; Zhang et al., 2022).
## 2.7 Future Research Directions [15Min]
Indirect supervision is the key to coping with a variety of NLP tasks that are not equipped with enough labeled data. We will conclude the tutorial by presenting further challenges and potential research topics, such as (i) explaining the model predictions when the supervision is indirect (Rajani et al., 2020; Lu et al., 2022b), (ii) injecting incidental signals that express human knowledge but cannot be learned by pretrained language models from natural texts (Yu et al., 2022), and (iii) task instructions as supervision (Wang et al., 2022).
## 3 Specification Of The Tutorial
The proposed tutorial is considered a **cutting-edge**
tutorial that introduces new frontiers in indirectly supervised NLP. The presented topic has not been covered by any ∗CL tutorials in the past 4 years.
Audience and Prerequisites Based on the level of interest in this topic, we expect around 150 participants. While no specific background knowledge is assumed of the audience, it would be best for the attendees to know about basic deep learning technologies and pre-trained language models (e.g., BERT).
A reading list that could help provide background knowledge to the audience before attending this tutorial is given in Appx. §A.2.
Breadth We estimate that at least 60% of the work covered in this tutorial is from researchers other than the instructors of the tutorial.
Diversity Considerations This tutorial will cover indirect supervision from beyond text. We will also cover content around how indirect supervision can be applicable to a variety of low-resourced tasks. Our presenter team has a diverse background from both academia (including assistant, associate, distinguished professors, and a senior Ph.D. student) and industry (a senior scientist at AWS AI).
Our instructor team will promote our tutorial on social media to diversify our audience participation.
Material Access All the materials are openly available (online open access) at https://cogcomp.seas.upenn.edu/page/tutorial.202307
## 4 Tutorial Instructors
The following are biographies of the speakers. Past tutorials given by us are listed in Appx. §A.1.
Wenpeng Yin is an Assistant Professor in the Department of Computer Science and Engineering at Penn State University. Prior to joining Penn State, he was a tenure-track faculty member at Temple University (1/2022-12/2022), Senior Research Scientist at Salesforce Research
(8/2019-12/2021), a postdoctoral researcher at UPenn (10/2017-7/2019), and got his Ph.D. degree from the Ludwig Maximilian University of Munich, Germany, in 2017. Dr. Yin's research focuses on natural language processing with three sub-areas: (i) learning from task instructions; (ii)
information extraction; (iii) learning with limited supervision. Additional information is available at www.wenpengyin.org.
Muhao Chen is an Assistant Research Professor of Computer Science at USC, where he directs the Language Understanding and Knowledge Acquisition (LUKA) Group. His research focuses on data-driven machine learning approaches for natural language understanding and knowledge acquisition. His work has been recognized with an NSF CRII Award, a Cisco Faculty Research Award, an ACM SIGBio Best Student Paper Award, and a Best Paper Nomination at CoNLL.
Muhao obtained his PhD degree from UCLA Department of Computer Science in 2019, and was a postdoctoral researcher at UPenn prior to joining USC. Additional information is available at http://luka-group.github.io.
Ben Zhou is a fourth-year Ph.D. student at the Department of Computer and Information Science, University of Pennsylvania. Ben's research interests are distant supervision extraction and experiential knowledge reasoning, and he has more than 5 recent papers on related topics. He is a recipient of the ENIAC fellowship from the University of Pennsylvania, and a finalist of the CRA outstanding undergraduate researcher award. Additional information is available at http://xuanyu.me/.
Qiang Ning is currently a senior applied scientist at AWS AI (2022-). Prior to that, Qiang was an applied scientist at Alexa AI (2020-2022)
and a research scientist at the Allen Institute for AI (2019-2020). Qiang received his Ph.D. from the University of Illinois at Urbana-Champaign in 2019 in Electrical and Computer Engineering. Qiang's research interests span information extraction, question answering, and the application of weak supervision methods in these NLP problems in both theoretical and practical aspects. Additional information is available at https://www.qiangning.info/.
Kai-Wei Chang is an associate professor in the Department of Computer Science at the University of California Los Angeles. His research interests include designing robust, fair, and accountable machine learning methods for building reliable NLP systems. His awards include the EMNLP
Best Long Paper Award (2017), the KDD Best Paper Award (2010), and the Sloan Research Fellowship (2021). Kai-Wei has given tutorials at NAACL 15, AAAI 16, FAccT 18, EMNLP 19, AAAI 20, EMNLP 21, and MLSS 21 on different research topics. Additional information is available at http://kwchang.net.
Dan Roth is the Eduardo D. Glandt Distinguished Professor at the Department of Computer and Information Science, UPenn, the NLP Lead at AWS AI Labs, and a Fellow of the AAAS,
ACM, AAAI, and ACL. In 2017 Roth was awarded the John McCarthy Award, the highest award the AI community gives to mid-career AI researchers.
Roth was recognized "for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning." Roth has published broadly in machine learning, NLP, KRR, and learning theory, and has given keynote talks and tutorials in all ACL and AAAI major conferences. Roth was the Editor-in-Chief of JAIR until 2017, and was the program chair of AAAI'11, ACL'03 and CoNLL'02; he serves regularly as an area chair and senior program committee member in the major conferences in his research areas. Additional information is available at www.cis.upenn.edu/~danroth.
## Acknowledgement
The presenters' research is supported in part by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA),
the DARPA MCS program under Contract No.
N66001-19-2-4033 with the United States Office Of Naval Research, Intelligence Advanced Research Projects Activity (IARPA) Contract No.
2019-19051600006 under the BETTER Program, the National Science Foundation (NSF) of United States Grant IIS 2105329, a subaward from NSF
Cloudbank 1925001 through UCSD, an Amazon Research Award and a Cisco Research Award. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein,
## Ethical Considerations
We do not anticipate any ethical issues specific to the topics of the tutorial. Nevertheless, some work presented in this tutorial extensively uses large-scale pretrained models with self-attention, which may lead to substantial financial and environmental costs.
## References
Jiarun Cao and Sophia Ananiadou. 2021. GenerativeRE: Incorporating a novel copy mechanism and pretrained model for joint entity and relation extraction. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2119–2126, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Muhao Chen, Weijia Shi, Ben Zhou, and Dan Roth.
2021. Cross-lingual entity alignment with incidental supervision. In *Proceedings of the 16th Conference* of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 645–658. Association for Computational Linguistics.
Muhao Chen, Hongming Zhang, Haoyu Wang, and Dan Roth. 2020. What are you trying to do? semantic typing of event processes. In *Proceedings of* the 24th Conference on Computational Natural Language Learning, pages 531–542, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186.
Jiangshu Du, Wenpeng Yin, Congying Xia, and Philip S.
Yu. 2023. Learning to select from multiple options.
In *AAAI*.
Hangfeng He, Qiang Ning, and Dan Roth. 2020. Quase:
Question-answer driven sentence encoding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8743–8758.
Hangfeng He, Mingyuan Zhang, Qiang Ning, and Dan Roth. 2021. Foreseeing the Benefits of Incidental Supervision. In *Proc. of the Conference on Empirical* Methods in Natural Language Processing (EMNLP).
I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng. 2022. DEGREE: A data-efficient generation-based event extraction model. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1890–1908, Seattle, United States. Association for Computational Linguistics.
James Y. Huang, Bangzheng Li, Jiashu Xu, and Muhao Chen. 2022. Unified semantic typing with meaningful label inference. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2642–2654, Seattle, United States. Association for Computational Linguistics.
Kung-Hsiang Huang, Sam Tang, and Nanyun Peng.
2021. Document-level entity-based extraction as template generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5257–5269, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering and text classification via span extraction. *CoRR*,
abs/1904.09286.
Bangzheng Li, Wenpeng Yin, and Muhao Chen. 2022a.
Ultra-fine entity typing with indirect supervision from natural language inference. *Transactions of the* Association for Computational Linguistics, 10:607–
622.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
In *Arxiv*.
Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, and Kai-Wei Chang. 2021.
Unsupervised vision-and-language pre-training without parallel images and captions. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5339–5350, Online. Association for Computational Linguistics.
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, KaiWei Chang, and Jianfeng Gao. 2022b. Grounded language-image pre-training. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10955–10965. IEEE.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC
framework for named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 5849–5859, Online. Association for Computational Linguistics.
Keming Lu, I-Hung Hsu, Mingyu Derek Ma, Wenxuan Zhou, and Muhao Chen. 2022a. Summarization as indirect supervision for relation extraction. In *EMNLP*
- Findings.
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022b. Learn to explain:
Multimodal reasoning via thought chains for science question answering. In *NeurIPS*.
Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. 2021. Zero-shot event extraction via transfer learning: Challenges and insights. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 322–332, Online.
Association for Computational Linguistics.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering.
CoRR, abs/1806.08730.
Qiang Ning, Hangfeng He, Chuchu Fan, and Dan Roth.
2019. Partial or Complete, That's The Question. In Proc. of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
Ehsan Qasemi, Piyush Khanna, Qiang Ning, and Muhao Chen. 2022. PInKS: Preconditioned commonsense inference with minimal supervision. In *Proceedings* of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 320–336, Online only. Association for Computational Linguistics.
Nazneen Fatema Rajani, Ben Krause, Wengpeng Yin, Tong Niu, Richard Socher, and Caiming Xiong.
2020. Explaining and improving model behavior with k nearest neighbor representations. *CoRR*,
abs/2010.09030.
Dan Roth. 2017. Incidental supervision: Moving beyond supervised learning. In *Proceedings of the* Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4885–4890.
Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label verbalization and entailment for effective zero and few-shot relation extraction. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event
/ Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1199–1212.
Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, and Eneko Agirre. 2022. Textual entailment for event argument extraction: Zero- and few-shot with multi-source learning. In *Findings* of the Association for Computational Linguistics:
NAACL 2022, Seattle, WA, United States, July 1015, 2022, pages 2439–2455.
Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, Hong Kong, China. Association for Computational Linguistics.
Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2066–2080, Online. Association for Computational Linguistics.
Brendan van Rooyen and Robert C. Williamson. 2018.
A Theory of Learning with Corrupted Labels. *Journal of Machine Learning Research*, 18(228):1–50.
Kaifu Wang, Qiang Ning, and Dan Roth. 2020. Learnability with Indirect Supervision Signals. In Proc.
of the Conference on Neural Information Processing Systems (NeurIPS).
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A. Smith, and Daniel Khashabi. 2022. Benchmarking generalization via in-context instructions on 1, 600+ language tasks. *CoRR*,
abs/2204.07705.
Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. CorefQA: Coreference resolution as querybased span prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6953–6963, Online. Association for Computational Linguistics.
Congying Xia, Wenpeng Yin, Yihao Feng, and Philip S.
Yu. 2021. Incremental few-shot text classification with multi-round new classes: Formulation, dataset and system. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1351–1360.
Hanzi Xu, Slobodan Vucetic, and Wenpeng Yin. 2022.
Openstance: Real-world zero-shot stance detection.
volume Proceedings of CoNLL.
Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, and Huajun Chen.
2021. Contrastive triple extraction with generative transformer. In *AAAI*.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3912–3921.
Wenpeng Yin, Dragomir R. Radev, and Caiming Xiong.
2021. Docnli: A large-scale dataset for documentlevel natural language inference. In Findings of ACL/IJCNLP, pages 4913–4922.
Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. 2022. Jaket: Joint pre-training of knowledge graph and language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11630–11638.
Hongyi Yuan, Zheng Yuan, and Sheng Yu. 2022. Generative biomedical entity linking via knowledge baseguided pre-training and synonyms-aware fine-tuning.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4038–4048, Seattle, United States. Association for Computational Linguistics.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In *The IEEE Conference on* Computer Vision and Pattern Recognition (CVPR).
Daojian Zeng, Haoran Zhang, and Qianying Liu. 2020.
Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9507–9514.
Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1:
Long Papers), pages 506–514, Melbourne, Australia.
Association for Computational Linguistics.
Haotian* Zhang, Pengchuan* Zhang, Xiaowei Hu, YenChun Chen, Liunian Harold Li, Xiyang Dai, Lijuan Wang, Lu Yuan, Jenq-Neng Hwang, and Jianfeng Gao. 2022. Glipv2: Unifying localization and vision-language understanding. *arXiv preprint* arXiv:2206.05836.
Wenzheng Zhang, Wenyue Hua, and Karl Stratos. 2021a.
Entqa: Entity linking as question answering. In *International Conference on Learning Representations*.
Yue Zhang, Hongliang Fei, and Ping Li. 2021b. Readsre: Retrieval-augmented distantly supervised relation extraction. In *Proceedings of the 44th International ACM SIGIR Conference on Research and* Development in Information Retrieval, pages 2257–
2262.
Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth.
2020. Temporal Common Sense Acquisition with Minimal Supervision. In Proc. of the Annual Meeting of the Association for Computational Linguistics
(ACL).
Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2021. Temporal reasoning on implicit events from distant supervision.
NAACL.
Ben Zhou, Kyle Richardson, Xiaodong Yu, and Dan Roth. 2022a. Learning to decompose: Hypothetical question decomposition based on comparable texts.
EMNLP.
Ben Zhou, Dian Yu, Dong Yu, and Dan Roth. 2022b.
Cross-lingual speaker identification using distant supervision. *Arxiv*.
## A Appendix

## A.1 Past Tutorials By The Instructors
The presenters of this tutorial have given the following tutorials at leading international conferences in the past.
- Muhao Chen:
- NAACL'22: New Frontiers of Information Extraction.
- ACL'21: Event-Centric Natural Language Processing.
- AAAI'21: Event-Centric Natural Language Understanding.
- KDD'21: From Tables to Knowledge: Recent Advances in Table Understanding.
- AAAI'20: Recent Advances of Transferable Representation Learning.
- Ben Zhou:
- NAACL'22: New Frontiers of Information Extraction
- Qiang Ning:
- ACL'21: Event-Centric Natural Language Processing.
- AAAI'21: Event-Centric Natural Language Understanding.
- Kai-Wei Chang:
- EMNLP'21: Robustness and Adversarial Examples in Natural Language Processing
- AAAI'20: Recent Advances of Transferable Representation Learning.
- EMNLP '19: A tutorial on Bias and Fairness in Natural Language Processing.
- ACM FAT*'18: A tutorial on Quantifying and Reducing Gender Stereotypes in Word Embeddings.
- TAAI'17: A tutorial on Structured Predictions:
Practical Advancements and Applications in Natural Language Processing.
- AAAI'16: A tutorial on Learning and Inference in Structured Prediction Models.
- NAACL'15: A tutorial on Hands-on Learning to Search for Structured Prediction.
- Dan Roth:
- NAACL'22: New Frontiers of Information Extraction.
- ACL'21: Event-Centric Natural Language Processing.
- AAAI'21: Event-Centric Natural Language Understanding.
- ACL'20: Commonsense Reasoning for Natural Language Processing.
- AAAI'20: Recent Advances of Transferable Representation Learning.
- ACL'18: A tutorial on Multi-lingual Entity Discovery and Linking.
- EACL'17: A tutorial on Integer Linear Programming Formulations in Natural Language Processing.
- AAAI'16: A tutorial on Structured Prediction.
- ACL'14: A tutorial on Wikification and Entity Linking.
- AAAI'13: Information Trustworthiness.
- COLING'12: A Tutorial on Temporal Information Extraction and Shallow Temporal Reasoning.
- NAACL'12: A Tutorial on Constrained Conditional Models: Structured Predictions in NLP.
- NAACL'10: A Tutorial on Integer Linear Programming Methods in NLP.
- EACL'09: A Tutorial on Constrained Conditional Models.
- ACL'07: A Tutorial on Textual Entailment.
## A.2 Recommended Paper List

The following is a reading list that could help provide background knowledge to the audience before attending this tutorial:
- Wenpeng Yin, Jamaal Hay, Dan Roth. Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach. EMNLP 2019.
- Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, Eneko Agirre. Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning. Findings of NAACL 2022.
- Wenzheng Zhang, Wenyue Hua, Karl Stratos. EntQA: Entity Linking as Question Answering. ICLR
2022.
- Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen. Summarization as Indirect Supervision for Relation Extraction. EMNLP -
Findings, 2022.
- Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark O. Riedl, Yejin Choi.
Reframing human-AI collaboration for generating free-text explanations. NAACL, 2022.
- Ben Zhou, Kyle Richardson, Xiaodong Yu, Dan Roth. Learning to decompose: Hypothetical question decomposition based on comparable texts.
EMNLP, 2022.
- Hangfeng He, Mingyuan Zhang, Qiang Ning, and Dan Roth. Foreseeing the Benefits of Incidental Supervision. EMNLP 2021.
- Kaifu Wang, Qiang Ning, and Dan Roth. Learnability with Indirect Supervision Signals. NeurIPS 2020.
- Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. CVPR 2019.
- Hao Tan and Mohit Bansal. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. EMNLP 2020. |
asai-etal-2023-retrieval | Retrieval-based Language Models and Applications | https://aclanthology.org/2023.acl-tutorials.6 | Retrieval-based language models (LMs) have shown impressive performance on diverse NLP tasks. In this tutorial, we will provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundation of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search). We will then detail recent progress in retrieval-based models, focusing on their model architectures and learning approaches. Finally, we will show how retrieval-based LMs are adapted to downstream applications, and extended to multilingual and multi-modal settings. Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs. | # Tutorial Proposal: Retrieval-Based Language Models And Applications
Akari Asai† Sewon Min† Zexuan Zhong‡ **Danqi Chen**‡
† University of Washington ‡Princeton University
{akari,sewon}@cs.washington.edu
{zzhong,danqic}@cs.princeton.edu
## 1 Description
Language models (LMs) such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022)
have shown impressive abilities in a range of natural language processing (NLP) tasks. However, relying solely on their parameters to encode a wealth of world knowledge requires a prohibitively large number of parameters and hence massive compute, and they often struggle to learn long-tail knowledge (Roberts et al., 2020; Kandpal et al., 2022; Mallen et al., 2022). Moreover, these parametric LMs are fundamentally incapable of adapting over time (De Cao et al., 2021; Lazaridou et al.,
2021; Kasai et al., 2022), often hallucinate (Shuster et al., 2021), and may leak private data from the training corpus (Carlini et al., 2021). To overcome these limitations, there has been growing interest in retrieval-based LMs (Guu et al., 2020; Khandelwal et al., 2020; Borgeaud et al., 2022; Zhong et al., 2022; Izacard et al., 2022b; Min et al., 2022),
which incorporate a non-parametric datastore (e.g.,
text chunks from an external corpus) with their parametric counterparts. Retrieval-based LMs can outperform LMs without retrieval by a large margin with much fewer parameters (Mallen et al.,
2022), can update their knowledge by replacing their retrieval corpora (Izacard et al., 2022b), and provide citations for users to easily verify and evaluate the predictions (Menick et al., 2022; Bohnet et al., 2022).
Previously, retrieval and LMs have been studied mostly separately; only recently have researchers integrated them and built systems in which retrieval and LMs interact more organically, and a number of retrieval-based LMs have been proposed due to growing interest. They differ in their neural architectures (e.g., the granularity of retrieval units, how to integrate retrieved information), learning algorithms, and different uses in downstream applications. In this tutorial, we aim to provide a comprehensive and coherent overview of recent advances in retrieval-based LMs. We will start by providing preliminaries covering the foundations of LMs (e.g., masked LMs, autoregressive LMs) and retrieval systems (e.g., nearest-neighbor search methods widely used in neural retrieval systems; Karpukhin et al. 2020). We will then focus on recent progress in architectures, *learning approaches*, and *applications* of retrieval-based LMs.
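To ground the retrieval preliminaries, the sketch below performs exact inner-product nearest-neighbor search over a toy datastore; real systems use trained encoders and approximate-nearest-neighbor indexes rather than the random placeholder vectors used here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy datastore: each text chunk is represented by a (unit-normalized) dense
# vector; random vectors stand in for embeddings from a trained encoder.
chunks = [f"chunk-{i}" for i in range(1000)]
chunk_vecs = rng.standard_normal((1000, 128)).astype(np.float32)
chunk_vecs /= np.linalg.norm(chunk_vecs, axis=1, keepdims=True)

def retrieve(query_vec, k=3):
    """Exact inner-product search; ANN libraries approximate this at scale."""
    scores = chunk_vecs @ query_vec
    top = np.argsort(scores)[-k:][::-1]
    return [(chunks[i], float(scores[i])) for i in top]

query = rng.standard_normal(128).astype(np.float32)
query /= np.linalg.norm(query)
print(retrieve(query, k=3))
```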
A taxonomy of architectures We introduce a taxonomy of architectures of retrieval-based LMs based on a variety of dimensions. Retrieval-based LMs can be categorized by the granularity of retrieved units stored in the datastore: either 1) a chunk of text (Borgeaud et al., 2022; Izacard et al.,
2022b), or 2) a token (Khandelwal et al., 2020; Zhong et al., 2022; Min et al., 2022), or 3) an entity mention (Févry et al., 2020; de Jong et al.,
2022). We also plan to cover techniques for refining datastores and improving similarity search (He et al., 2021; Alon et al., 2022). At the same time, retrieval-based LMs can be categorized based on how the retrieved information is integrated with the parametric encoder: 1) whether retrieved components are concatenated with the original input text (Lewis et al., 2020; Guu et al., 2020; Izacard et al., 2022b), 2) whether the retrieved components are latent and integrated into the intermediate layers of Transformers (de Jong et al., 2022; Févry et al., 2020; Borgeaud et al., 2022), or 3) whether the token distributions from the retrieved components and the LM are interpolated (Khandelwal et al., 2020; Zhong et al., 2022; Yogatama et al., 2021).
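The output-layer interpolation in the third category can be sketched as follows (a toy version of the kNN-LM idea; the datastore, dimensions, and hyperparameters are placeholders):

```python
import numpy as np

def interpolate_with_knn(lm_probs, query, keys, next_tokens, vocab_size,
                         k=8, temperature=1.0, lam=0.25):
    """p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x), where p_kNN is built
    from the k nearest (hidden-state key, next-token) entries in the datastore."""
    dists = np.linalg.norm(keys - query, axis=1)      # L2 distance to every key
    nn = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn] / temperature)
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    for w, idx in zip(weights, nn):
        knn_probs[next_tokens[idx]] += w
    return lam * knn_probs + (1.0 - lam) * lm_probs

rng = np.random.default_rng(0)
V = 10
lm_probs = rng.dirichlet(np.ones(V))                  # toy parametric LM distribution
keys = rng.standard_normal((500, 16))                 # toy datastore keys
next_tokens = rng.integers(0, V, size=500)            # token stored with each key
query = rng.standard_normal(16)                       # current hidden state
print(interpolate_with_knn(lm_probs, query, keys, next_tokens, V).round(3))
```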
Scalable learning algorithms Then, we discuss the *training approaches* of retrieval-based LMs.
Since a retrieval datastore is typically very large, how to train retrieval-based LMs effectively and efficiently remains challenging. We first discuss pipelined approaches that train retrieval components and LMs separately, either through large-scale pre-training (Izacard et al., 2022a) or multitask instruction tuning (Asai et al., 2022). Several other works train retrieval-based LMs with a fixed retrieval module (Borgeaud et al., 2022; Yogatama et al., 2021). We then discuss joint training under reasonable resource requirements: either through in-batch approximations to a full datastore, or updating the datastore with updated parameters asynchronously. The former uses fractions of the full corpus that are carefully designed during joint training (Zhong et al., 2022; de Jong et al., 2022; Min et al., 2022). The latter, on the other hand, aims to use the full corpus during training with asynchronous index updates every certain number of training steps (Izacard et al., 2022b; Guu et al., 2020).
Adaption to downstream tasks After discussing the basic building blocks of retrieval-based LMs, we show how retrieval-based LMs are adapted to downstream applications. We first briefly summarize the two approaches to adapt a model to a new task: zero-shot or few-shot prompting without any parameter updates (Shi et al., 2022; Wang et al., 2022), and fine-tuning on target task data (Lewis et al., 2020). We then discuss methods designed to build more powerful retrieval-based LMs for certain downstream tasks, such as dialogue (Shuster et al., 2021), semantic parsing (Pasupat et al.,
2021), and machine translation (Khandelwal et al., 2021; Zheng et al., 2021).
Up to this point, our tutorial has mainly focused on retrieving and integrating English plain text. Next, we will cover recent extensions of retrieval-based LMs beyond English text, including multilingual (Asai et al., 2021), multimodal (Chen et al., 2022; Yasunaga et al., 2022)
and code (Parvez et al., 2021) retrieval. These works often extend dense retrieval models to enable retrieval between heterogeneous input spaces (e.g.,
cross-lingual, cross-modal) and have shown that referring to retrieved knowledge benefits knowledge-intensive generation.
Finally, we will use an exercise to showcase the effectiveness of retrieval-based LMs. We conclude our tutorial by discussing several important questions and future directions, including (1) how we can further improve the scalability of retrieval-based LMs without sacrificing performance, (2)
when retrieval-based LMs are particularly useful in the era of rapidly evolving LMs, and (3) what is necessary to enable applications of retrieval-based LMs for more diverse domains.
## 2 Tutorial Outline
1. Introduction (15 minutes)
- An overview of the tutorial
- Why retrieval-based LMs?
2. Preliminaries (15 minutes)
- Language models: Auto-regressive LMs vs.
masked LMs
- Dense retrieval methods
- Approximate nearest neighbor search

3. Retrieval-based LMs: A taxonomy of architectures (40 minutes)
- Granularity of datastore: tokens, entity mentions, and chunks of text
- How retrieved information is integrated: incorporation in the input layer, intermediate layers, and the output layer
4. Retrieval-based LMs: Scalable learning algorithms (40 minutes)
- Pipelined training
- Training with In-batch approximations
- Joint training of retrieval and LMs with asynchronous updates of corpus
5. Retrieval-based LMs: Downstream adaptations (40 minutes)
- Adaptation methods: zero-shot/few-shot prompting and fine-tuning on downstream tasks
- Downstream applications and task-specific modifications (e.g., dialogue, semantic parsing)
6. Extensions beyond English text (10 minutes)
- Multilingual retrieval-based LMs
- Multimodal retrieval-based LMs
- Code generation

7. Demonstration: An exercise to show retrieval-augmented LMs (10 minutes)
8. Conclusions and future directions (10 minutes)
## 3 Tutorial Information

Type of the tutorial Cutting-edge.
Length This is a 3-hour tutorial.
Target audience The tutorial will be accessible to anyone who has a basic knowledge of machine learning and natural language processing. We think the topic will be of interest to both NLP researchers/students in academia and NLP practitioners in the industry.
Breadth We estimate that 20% of the work covered in this tutorial will be by the presenters and the remaining 80% by others. The papers we will cover are from both academia and industry.
Diversity considerations. The speakers are from two academic institutions with an affiliation with an industry research group, including both a professor and Ph.D. students. Three out of four speakers are female. The methods covered by our tutorial can scale up to various languages or domains, and we also briefly cover several papers focusing on multilingual and expert-domain extensions of the core frameworks. We will reach out to academic communities such as WiNLP1 and Masakhane2 to encourage them to attend our tutorial and diversify audience participation. Since retrieval-based LMs are alternatives to LMs with a significantly large number of parameters, we expect this tutorial to be especially useful to researchers with modest resources who do not have access to very large models.
An estimate of the audience size Given that language models are now used in a range of NLP tasks and retrieval-based approaches have been applied to diverse domains, we estimate an audience size of around 150+.
Venues. We prefer ACL due to the growing interest in the area and the travel constraints of some of the speakers. EMNLP is our second preferred choice, and we currently do not consider EACL.
Technical equipment. We would like to have Internet access to show online demos.
Open access We plan to make all teaching material available online and agree to allow the publication of slides and video recordings in the ACL
anthology.
1http://www.winlp.org/
2https://www.masakhane.io/
Ethical considerations Retrieval-based LMs are often more powerful and parameter-efficient than LMs, and do not require full re-training to update world knowledge, which makes them more energy-efficient and can reduce their carbon footprint. Prior work also shows that referring to external world knowledge can reduce harmful biases and hallucinations, although retrieval-based LMs can still produce plausible-sounding but incorrect or nonsensical outputs. We note that retrieval-based LMs may retrieve raw data from a corpus, which can leak privacy-sensitive information, especially when they are built on top of a private corpus. We acknowledge this to caution those who intend to apply retrieval-based LMs to privacy-sensitive domains.
Pedagogical material We plan to do some short hands-on exercises to let the audience try different retrieval-based LMs with few-shot prompting using Colab.
## Past Tutorials.
- ACL 2020 tutorial on Open-domain QA (Chen and Yih, 2020): This tutorial provides comprehensive reviews of open-domain question answering systems, some of which consist of a retriever and a generative model, while we focus on the recent progress of architectures and learning algorithms of retrieval-based LMs for diverse NLP tasks, not limiting our focus to open-domain QA. Most of the papers that will be discussed in this tutorial have been published since the Open-domain QA tutorial three years ago. Moreover, one of the instructors, Danqi, was an instructor of this ACL 2020 tutorial.
- SIGIR 2022 tutorial on Recent Advances in Retrieval-Augmented Text Generation (Cai et al., 2022): This tutorial mainly covers recent retrieval-augmented text generation approaches, with a focus on two applications:
dialogue and machine translation. Our tutorial puts more emphasis on the architecture and learning methods of retrieval-based LMs that can be applicable to diverse NLP tasks.
## 4 Presenters
Akari Asai Akari Asai is a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Prof. Hannaneh Hajishirzi. Her research lies in natural language processing and machine learning. Her recent research focuses on question answering, retrieval-based LMs, multilingual NLP,
and entity-aware representations. She received the IBM Fellowship in 2022. She is a lead organizer of the Workshop on Multilingual Information Access (NAACL 2022) and serves as an area chair in question answering at EACL 2023.
Sewon Min Sewon Min is a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and a visiting researcher at Meta AI. Her research spans question answering, representation and retrieval of factoid knowledge, and language modeling. She was a co-instructor and a co-organizer of multiple tutorials and workshops at ACL, NAACL-HLT,
EMNLP, NeurIPS and AKBC, including a tutorial on Few-Shot NLP with Pretrained Language Models (ACL 2022), a tutorial on NLP for Long Sequences (NAACL-HLT 2021), and the Workshop on Semiparametric Methods in NLP (ACL 2022).
Zexuan Zhong Zexuan Zhong is a Ph.D. student in the Department of Computer Science at Princeton University, advised by Prof. Danqi Chen. His research interests lie in natural language processing and machine learning. His recent research focuses on retrieval-based LMs, generalization of retrieval models, and efficient models in NLP. He received a J.P. Morgan PhD Fellowship in 2022.
Danqi Chen Danqi Chen is an Assistant Professor of Computer Science at Princeton University and co-leads the Princeton NLP Group. Her recent research focuses on training, adapting, and understanding large LMs, and developing scalable and generalizable NLP systems for question answering, information extraction, and conversational agents. Danqi is a recipient of a Sloan Fellowship, a Samsung AI Researcher of the Year award, outstanding paper awards from ACL 2016, EMNLP 2017 and ACL 2022, and multiple industry faculty awards. Danqi served as the program chair for AKBC 2021 and as a (senior) area chair for many *ACL conferences. She taught a tutorial on "Open-domain Question Answering" at ACL 2020.
## 5 Reading List
- Unsupervised Dense Information Retrieval with Contrastive Learning (Izacard et al.,
2022a)
- Task-aware Retrieval with Instructions (Asai et al., 2022)
- Atlas: Few-shot Learning with Retrieval Augmented Language Models (Izacard et al.,
2022b)
- Improving language models by retrieving from trillions of tokens (Borgeaud et al., 2022)
- Mention Memory: incorporating textual knowledge into Transformers through entity mention attention (de Jong et al., 2022)
- Generalization through Memorization: Nearest Neighbor Language Models (Khandelwal et al., 2020)
- Nonparametric Masked Language Model (Min et al., 2022)
- Training Language Models with Memory Augmentation (Zhong et al., 2022)
- kNN-Prompt: Nearest Neighbor Zero-Shot Inference (Shi et al., 2022)
- Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval (Alon et al.,
2022)
## References
Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022.
Neuro-symbolic language modeling with automatonaugmented retrieval. In International Conference on Machine Learning (ICML), Baltimore, USA.
Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, and Wen-tau Yih. 2022. Task-aware retrieval with instructions. *arXiv preprint arXiv:2211.09260*.
Akari Asai, Xinyan Yu, Jungo Kasai, and Hanna Hajishirzi. 2021. One question answering model for many languages with cross-lingual dense passage retrieval. In *Advances in Neural Information Processing Systems*.
Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al.
2022. Attributed question answering: Evaluation and modeling for attributed large language models. *arXiv* preprint arXiv:2212.08037.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In *Advances in neural information processing systems*.
Deng Cai, Yan Wang, Lemao Liu, and Shuming Shi.
2022. Recent advances in retrieval-augmented text generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21).
Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics: Tutorial Abstracts, pages 34–37, Online.
Association for Computational Linguistics.
Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, and William W Cohen. 2022. Murag: Multimodal retrieval-augmented generator for open question answering over images and text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*.
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William W. Cohen. 2022. Mention memory: incorporating textual knowledge into transformers through entity mention attention. In *International Conference on Learning Representations*.
Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*.
Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2021. Efficient nearest neighbor language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, Punta Cana, Dominican Republic.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022a. Unsupervised dense information retrieval with contrastive learning. *Transactions* on Machine Learning Research.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022b. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large language models struggle to learn long-tail knowledge. *arXiv* preprint arXiv:2211.08411.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui.
2022. Realtime qa: What's the answer right now?
arXiv preprint arXiv:2207.13332.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *International Conference on Learning* Representations.
Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, et al. 2021. Mind the gap:
Assessing temporal generalization in neural language
models. Advances in Neural Information Processing Systems.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*.
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi.
2022. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. *arXiv preprint* arXiv:2212.10511.
Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy CampbellGillingham, Geoffrey Irving, et al. 2022. Teaching language models to support answers with verified quotes. *arXiv preprint arXiv:2203.11147*.
Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wentau Yih, Hannaneh Hajishirzi, and Luke Zettlemoyer.
2022. Nonparametric masked language modeling.
arXiv preprint arXiv:2212.01349.
Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2719–2734, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Panupong Pasupat, Yuan Zhang, and Kelvin Guu. 2021.
Controllable semantic parsing via retrieval augmentation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020.
How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. 2022. Nearest neighbor zero-shot inference. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics:
EMNLP 2021.
Zhenhailong Wang, Xiaoman Pan, Dian Yu, Dong Yu, Jianshu Chen, and Heng Ji. 2022. Zemi: Learning zero-shot semi-parametric language models from multiple tasks. *arXiv preprint arXiv:2210.00185*.
Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2022. Retrievalaugmented multimodal language modeling. arXiv preprint arXiv:2211.12561.
Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362–373.
Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021.
Adaptive nearest neighbor machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. |
pu-demberg-2023-chatgpt | {C}hat{GPT} vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer | https://aclanthology.org/2023.acl-srw.1 | Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT{'}s performance in two controllable generation tasks, with respect to ChatGPT{'}s ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model{'}s performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style. | # Chatgpt Vs Human-Authored Text: Insights Into Controllable Text Summarization And Sentence Style Transfer
Dongqi Pu and Vera Demberg
Department of Computer Science Department of Language Science and Technology Saarland Informatics Campus, Saarland University, Germany
{dongqipu,vera}@lst.uni-saarland.de
## Abstract

Large-scale language models, like ChatGPT, have garnered significant media attention and stunned the public with their remarkable capacity for generating coherent text from short natural language prompts. In this paper, we aim to conduct a systematic inspection of ChatGPT's performance in two controllable generation tasks, with respect to ChatGPT's ability to adapt its output to different target audiences (expert vs. layman) and writing styles (formal vs. informal). Additionally, we evaluate the faithfulness of the generated text, and compare the model's performance with human-authored texts. Our findings indicate that the stylistic variations produced by humans are considerably larger than those demonstrated by ChatGPT, and the generated texts diverge from human samples in several characteristics, such as the distribution of word types. Moreover, we observe that ChatGPT sometimes incorporates factual errors or hallucinations when adapting the text to suit a specific style.1

## 1 Introduction
Generative Pre-trained Transformer (GPT; *e.g.,*
ChatGPT) models, which produce results from given conditional input prompts, have exhibited exceptional performance on various natural language understanding (NLU) and generation (NLG)
tasks (Jiao et al., 2023; Wang et al., 2023a; Bang et al., 2023b; Zhou et al., 2023; Dai et al., 2023).
For instance, in NLU tasks, Qin et al. (2023) have proved that ChatGPT is comparable to state-of-the-art fine-tuning models in language reasoning. In NLG tasks, Yang et al. (2023a) assessed four widely used benchmark datasets, such as QMSum, and confirmed ChatGPT's comparability to traditional fine-tuning methods. Peng et al. (2023) further investigated effective strategies for machine translation using ChatGPT and highlighted its strong translation ability. Additionally, ChatGPT can even facilitate multi-modal tasks (Yang et al., 2023b; Shen et al., 2023), as well as the application of data augmentation (Dai et al., 2023). Although the studies mentioned above have demonstrated notable performance of ChatGPT across different domains, there remains a dearth of qualitative and quantitative evaluation of the texts generated by ChatGPT.

1The project information of our study can be accessed at https://dongqi.me/projects/ChatGPT_vs_Human.
Such an evaluation is vital to uncover the behavioral differences, potential limitations, and challenges associated with ChatGPT-generated texts, especially when compared with human-authored texts.
Controllable text generation seems to be a task in which ChatGPT-like models could potentially excel. This task is driven by the desire to tailor text for a diverse array of target users (*e.g.,* experts and laypersons) (Kumar et al., 2022; Cao et al., 2020; Luo et al., 2022), and thereby enhancing the accessibility of textual information. In controllable text generation, one delineates a particular set of parameters or provides a prompt that defines the intended target style. This area has recently received growing interest from researchers in the field (Hu and Li, 2021; Li et al., 2022; Zhang et al., 2022; Dathathri et al., 2019a; August et al., 2022; Carlsson et al., 2022; Gu et al., 2022; Li et al., 2022; Keskar et al., 2019; Dathathri et al.,
2019b). The traditional natural language generation task (Pu and Sima'an, 2022), which focuses solely on adequately responding with respect to a given input, can be regarded as a special case of controllable natural language generation, wherein the control setting remains unconditioned. Considering ChatGPT as the most recent language generation capability, the assessment of its language generation proficiency, specifically in the realm of controllable language generation, remains largely uncharted. Therefore, our study delves into two distinct applications of ChatGPT, namely controllable summary generation and sentence style transfer. In the former, we examine ChatGPT's ability to generate summaries that cater to two distinct readerships, namely experts and non-experts, for a given piece of academic literature. Concerning sentence style transfer, we investigate ChatGPT's capability to generate both formal and informal sentences for the task of sentence formality.
The objective of this study is to tackle the research question: **In relation to human-produced text, to what extent does ChatGPT-generated content demonstrate significant divergence from human behavior and potential susceptibility to inaccuracies?** Our primary contributions are enumerated below:
- To the best of our knowledge, we are the first to utilize ChatGPT to evaluate its effectiveness in controllable text generation.
- Our findings indicate that there are substantial performance disparities between the text generated by ChatGPT and that generated by humans.
- Our study exposes and quantifies the existence of numerous hard-to-spot errors in the text generated by ChatGPT, which have a tendency to amplify with successive transformations of the text.
## 2 Related Work

## 2.1 Controllable Text Summarization
Controllable text summarization is a rapidly evolving field that aims to produce summaries with specific characteristics, such as length, style, or content (Shen et al., 2022b; Chan et al., 2021; Sarkhel et al., 2020; Shen et al., 2022a; Goldsack et al.,
2022; Keskar et al., 2019; Dathathri et al., 2019b; He et al., 2022; Earle et al., 2021; Liu et al., 2022b).
A range of approaches has been proposed for this task, including the use of sequence-to-sequence models such as the Transformer model (Vaswani et al., 2017). These models have demonstrated promising progress in producing high-quality summaries that can be modulated according to specific requirements (Fan et al., 2018; Wu et al., 2021; Amplayo et al., 2021). Additionally, other techniques also have been proposed to enhance the controllability of the summaries, such as conditional generation (He et al., 2022; Luo et al., 2022),
prompt-based summarization (Yang et al., 2022; Liu et al., 2022a; Zhang and Song, 2022), and multi-task learning (Cui and Hu, 2021; Gu et al.,
2022).
## 2.2 Text Style Transfer
Text style transfer is a task that involves transforming an input sentence into a desired style while retaining its style-independent semantics (Jin et al., 2022; Zhu et al., 2021; Dai et al., 2019; Li et al., 2020; Babakov et al., 2022; Mir et al., 2019; Ramesh Kashyap et al., 2022; Tokpo and Calders, 2022). To achieve this, prior research has examined sequence-to-sequence learning strategies that utilize parallel corpora with paired source/target sentences in different styles (Cheng et al., 2020; Hu et al., 2021; Nouri, 2022). Owing to the considerable demand for human resources and material investments in data labeling, parallel data across diverse styles are scarce. This has led to an increased interest in exploring more pragmatic situations where only non-parallel stylized corpora are accessible (Malmi et al., 2020; Reif et al., 2022).
## 2.3 ChatGPT
ChatGPT2 is a large language model (LLM), which is built upon the innovations and improvements of its predecessors, such as GPT-3.3 In terms of training strategies, ChatGPT employs instruction learning and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022) to enhance its overall performance and adaptability.
Upon its emergence, ChatGPT has garnered considerable attention from researchers, who have undertaken initial studies into the model. Scholars such as Baidoo-Anu and Owusu Ansah (2023); Rudolph et al. (2023); West (2023); Sobania et al.
(2023); Gilson et al. (2023); Lai et al. (2023); Wang et al. (2023b) have explored the notable strengths of ChatGPT in the fields of education, science, programming, healthcare, and text generation, respectively. However, Bang et al. (2023a) discovered that ChatGPT suffers from hallucination issues in the context of logical reasoning. Due to its immense and inaccessible training corpus and parameters, and its inability to access external knowledge for reliable sources of support, it is imperative to question whether ChatGPT demonstrates the same hallucination issue as other LLMs when performing sentence generation. Based on these clues, we firmly assert that an in-depth analysis of the text generated by ChatGPT and its behavioral patterns is both significant and valuable, and can provide meaningful insights to the readers of this paper.
2https://openai.com/blog/chatgpt 3https://openai.com/research/instruction-following
## 3 Study On Controllable Summarization

## 3.1 Prompt Formulation
In this section, our main objective is to test the zero-shot performance of ChatGPT on controllable summarization, with the goal to generate summaries for laymen vs. experts. To this end, we constructed several prompts as natural language instructions for ChatGPT. The prompts we tested include for the layman style: Please give me a layman / simple / simplified and understandable
/ easy-to-comprehend / straightforward / general audience summary *of X*, where X was replaced by the source text that should be summarized. Similarly, for the expert summary, we experimented with the prompts: Please give me an expert / a technical / comprehensive and detailed / difficult-to-comprehend / in-depth / complicated *summary* of X.
## 3.2 Experimental Setup
For all experiments, we used ChatGPT gpt-3.5-turbo, which was, at the time of experimentation, the best-performing publicly accessible version provided by OpenAI. For the hyper-parameter setting, we set temperature = 0, top p = 1, frequency penalty
= 0.2, and presence penalty = 0.2. For summary generation, we configured the maximum number of generated tokens to 512. The remaining hyperparameters were set to their default values as recommended by OpenAI. It is noteworthy that ChatGPT
has the potential to generate empty responses (i.e.,
empty strings) as the result of network transmission timeouts or API request overloads. Should this arise, we adhere to the established practice of resubmitting the request until ChatGPT provides non-empty responses.
All of our experiments were conducted on the version of ChatGPT available between 15 Feb 2023 and 30 Apr 2023, using OpenAI's ChatGPT API.4 We should emphasize that to prevent any potential interference from prior responses, we cleared the conversation history each time we submitted a new query to ChatGPT. Unless otherwise specified, we refrained from engaging in any further conversation with ChatGPT to modify its responses.

4https://platform.openai.com/overview
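For concreteness, a query in this setup could look like the following sketch, which assumes the 2023-era openai Python package (version < 1.0) and the hyper-parameters listed above; the helper name, the prompt-template dictionary, and the retry loop are our own illustrative choices rather than the paper's exact code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the user

# Prompt templates follow Section 3.1; the dictionary itself is illustrative.
TEMPLATES = {
    "layman": "Please give me a layman summary of {}",
    "expert": "Please give me an expert summary of {}",
}

def query_chatgpt(source_text: str, audience: str = "layman") -> str:
    """Request a layman- or expert-style summary with the settings of Section 3.2."""
    prompt = TEMPLATES[audience].format(source_text)
    answer = ""
    while not answer:  # re-submit until a non-empty response is returned
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],  # no conversation history
            temperature=0,
            top_p=1,
            frequency_penalty=0.2,
            presence_penalty=0.2,
            max_tokens=512,
        )
        answer = response["choices"][0]["message"]["content"].strip()
    return answer
```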
## 3.3 Dataset
We selected the ELIFE (Goldsack et al., 2022) dataset for our experiments. It contains summaries of academic literature that exhibit varying levels of readability, tailored to suit either expert or non-expert audiences. By means of this dataset, we can examine to what extent ChatGPT can regulate the summary generation process in accordance with the intended target users, and compare its summaries to human summaries.
## 3.4 Metrics
In order to assess automatically whether ChatGPT
summaries substantially differ in terms of their audience design based on the given prompt, we opted for a set of three automatic readability metrics:
Flesch Reading Ease (FRE; Kincaid et al., 1975),
Coleman-Liau Index (CLI; Coleman and Liau, 1975), and Dale-Chall Readability Score (DCR;
Chall and Dale, 1995).
The Flesch Reading Ease (Kincaid et al., 1975)
is a metric that gauges the comprehensibility of a given text. This index relies on the average number of syllables per word and the average number of words per sentence. A higher score signifies an easier-to-understand text. Additionally, the Coleman-Liau Index (Coleman and Liau, 1975)
is a measure of the text's difficulty level, which considers the average number of characters per sentence and the average number of sentences per 100 words. A higher score indicates a more challenging text. The Dale-Chall Readability Score (Chall and Dale, 1995) is computed by comparing the number of complex words in the text with a list of common words. A higher score denotes a more challenging text.
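As an illustration, the three readability scores can be obtained with the textstat package as in the short sketch below; this is our own minimal example, not the evaluation code used in the paper.

```python
import textstat

def readability_profile(text: str) -> dict:
    """FRE (higher = easier), CLI and DCR (higher = harder)."""
    return {
        "FRE": textstat.flesch_reading_ease(text),
        "CLI": textstat.coleman_liau_index(text),
        "DCR": textstat.dale_chall_readability_score(text),
    }

# Example: readability_profile(chatgpt_layman_summary)
```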
We also employed Rouge scores (Lin, 2004) to evaluate the performance of ChatGPT in the task of text summarization, with the aim of comparing its efficacy against the state-of-the-art model. In order to assess the extent to which the summaries re-use word sequences from the original text, we furthermore evaluated N-gram novelty (See et al., 2017; Gehrmann et al., 2019; Pu et al., 2022). Finally, we quantified inconsistency based on factual consistency checking metric SummaC (Laban et al.,
2022), as well as hallucination checking metric
(Cao et al., 2022; Fischer et al., 2021). SummaC (Laban et al., 2022) uses sentence compression and summarization techniques to extract important information and improve the detection of inconsistencies in NLI models by segmenting documents and aggregating scores. Named entity hallucination
(Fischer et al., 2021) flags potential hallucinations in named entities if they do not match the original sources. We here used BERT semantic similarity, rather than exact matching, when matching named entities.
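For completeness, the sketch below shows one straightforward way to compute n-gram novelty and named-entity precision/recall against the source document; it relies on spaCy for entity extraction and uses exact string matching as a simplification of the BERT-similarity matching described above, so it approximates rather than reproduces the paper's procedure.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def ngram_novelty(source: str, summary: str, n: int = 4) -> float:
    """Fraction of summary n-grams that never appear in the source text."""
    def ngrams(text: str) -> set:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    summ = ngrams(summary)
    return len(summ - ngrams(source)) / max(len(summ), 1)

def entity_overlap(source: str, summary: str) -> dict:
    """Named-entity precision/recall/F1 of the summary wrt. the source
    (exact matching here; the paper uses BERT semantic similarity instead)."""
    src = {e.text.lower() for e in nlp(source).ents}
    gen = {e.text.lower() for e in nlp(summary).ents}
    if not src or not gen:
        return {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    p = len(gen & src) / len(gen)
    r = len(gen & src) / len(src)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return {"precision": p, "recall": r, "f1": f1}
```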
## 3.5 Results On Controllable Summarization

## 3.5.1 Effect Of Prompt Formulation
Table 1 illustrates that different prompt versions are somewhat consistent regarding whether the instructions asking for layman summaries actually lead to more readable texts than those asking for expert summaries, with FRE ranging between scores of 31 and 38 for automatically generated layman summaries, and between 28 and 37 for automatically generated expert summaries. Conversely, human-written summaries exhibit very large differences according to the automatic metrics, with FRE of 53.1 for layman summaries and 22.5 for expert summaries. Similar effects are observed for the CLI and DCR measures. This preliminary test was conducted on a subset of the ELIFE dataset, containing merely 500 random samples; for the rest of the tests, we proceeded to the entire dataset, selecting the prompts asking for "layman" and "expert" summaries, as responses for these prompts seemed to align with the right direction wrt. the readability measures.
| Prompt version | FRE | CLI | DCR |
|----------------------------|--------|--------|--------|
| layman | 37.26† | 14.82† | 11.21† |
| simple | 31.92† | 15.70† | 11.54† |
| simplified and understand. | 35.48† | 15.17† | 11.21† |
| easy-to-comprehend | 36.59† | 14.93† | 11.32† |
| straightforward | 31.74† | 15.58† | 11.42† |
| general audience | 35.86† | 14.98† | 10.96† |
| human answer (for layman) | 53.06 | 12.36 | 8.90 |
| expert | 29.89† | 15.91† | 11.88† |
| technical | 36.65† | 13.76† | 12.20† |
| comprehensive and detailed | 31.62† | 15.47† | 11.15† |
| difficult-to-comprehend | 28.95† | 16.14† | 11.71† |
| in-depth | 34.37† | 14.93† | 10.82† |
| complicated | 29.05† | 15.76† | 11.40† |
| human answer (for expert) | 22.54 | 17.65 | 11.79 |
Table 1: Reading difficulty on different prompts, tested on a set of 500 randomly selected items from the dataset.
†indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.
## 3.5.2 Reading Difficulty Control
Table 2 corroborates that the results of the whole dataset are consistent with the findings from the smaller sample. We conclude that ChatGPT can produce summaries with different levels of reading difficulty to a certain extent based on the provided prompts. Notably, ChatGPT-generated sentences for expert-style summaries show greater complexity than those for layman-style summaries. However, the magnitude of the difference in the reading difficulty scores between the two types of summaries is considerably smaller than that observed in human-written summaries.
| Candidate | FRE | CLI | DCR |
|----------------|---------|---------|---------|
| Human Layman | 52.42 | 12.46 | 8.93 |
| Human Expert | 23.20 | 17.62 | 11.78 |
| ChatGPT Layman | 37.38†‡ | 14.78†‡ | 11.17†‡ |
| ChatGPT Expert | 30.38†‡ | 15.82†‡ | 11.85†‡ |

Table 2: Reading difficulty scores by automatic metrics; † and ‡ indicate statistical significance (p<0.05) against same-style human answers, and opposite-style ChatGPT answers via paired t-test, respectively.
## 3.5.3 Comparison To Previous SOTA Model
We also compared summaries generated by ChatGPT to a previous state-of-the-art (SOTA) neural fine-tuned summarization model (Pu et al., 2023).
On the same test split, the summaries produced by ChatGPT reached Rouge-1=25.53, Rouge-2=5.48, Rouge-L=13.30 under unsupervised learning, and Rouge-1=47.88, Rouge-2=13.75, Rouge-L=42.44 in few-shot learning using the training samples from the same subset as in Section 3.5.1, while the model by Pu et al. (2023) reached Rouge-1=48.70, Rouge-2=14.84, and Rouge-L=46.13.
## 3.5.4 Disparities In Summarization Behavior
We next examined whether ChatGPT and humans are consistent with each other regarding the readability of summarization with respect to different items - it is possible that some texts simply lead to less readable summaries than others. However, we discovered that Pearson correlations of FRE scores for summaries by humans and ChatGPT were only 0.31 for expert summaries, and 0.2 for layman summaries. (Scores were similarly low for the CLI and DCR metrics.) In addition, the statistical significance test elucidates the noteworthy divergence between the distinctive response styles produced by ChatGPT and the analogous styles of human-generated answers.
Following this, we contrasted the n-gram novelty of human vs. ChatGPT summaries wrt. the original texts. Figure 1 reveals that a significantly higher number of novel 4-grams are present in human-written summaries, particularly those aimed at laymen. This suggests that ChatGPT summaries are slightly more extractive compared to human summaries.
![4_image_1.png](4_image_1.png)
## 3.5.5 Inconsistencies And Hallucinations
Given that ChatGPT has previously been reported to generate misinformation, we sought to evaluate its risk of hallucinating on our specific task.
Figure 2 demonstrates that the SummaC consistency scores are lower for ChatGPT-generated summaries than for human-written summaries. A corresponding phenomenon is verified in the hallucination assessment. The precision scores provided in Table 3 demonstrate the extent to which ChatGPT-generated text contains named entities that are absent in the source text. A lower precision score suggests that the generated text has more named entities that lack support in the source text. The recall scores reflect the ability of ChatGPT to capture named entities from the source text. A lower recall score implies that ChatGPT has missed a considerable number of named entities from the source text. The F1 score represents the harmonic mean of the precision and recall scores. By examining Table 3, our findings demonstrate that ChatGPT generates a greater number of named entities that are not present in the source text after undergoing multiple iterations of text conversions and modification. For example, in an expert summary, ChatGPT misinterpreted the meaning of "Geocode" as "regional regulations".
![4_image_0.png](4_image_0.png)

| Candidate | Precision | Recall | F1 |
|----------------|-----------|--------|--------|
| Human Layman | 0.78 | 0.63 | 0.70 |
| Human Expert | 0.92 | 0.61 | 0.73 |
| ChatGPT Layman | 0.75‡ | 0.47† | 0.58† |
| ChatGPT Expert | 0.90‡ | 0.49† | 0.63† |
| ChatGPT L2E2L | 0.74‡ | 0.39†‡ | 0.51†‡ |
| ChatGPT E2L2E | 0.88‡ | 0.47†‡ | 0.62†‡ |

## 3.6 Intermediary Discussion

Our experiments show that ChatGPT-generated summaries do not adapt as strongly to the target audience as human-authored summaries. One possible reason could be that ChatGPT, given the zero-shot setting, had no way to "know" how strongly the texts should be adapted to the target style. Furthermore, we identified evidence for potential hallucinations generated during summarization. We, therefore, carried out two post-hoc experiments:
(1) We modified the prompt to include an example from the dataset, so ChatGPT would have a chance to know the expected level of text adaptation. (2)
We subjected the resulting summaries to several re-writing steps and tested whether this further intensifies the occurrence of hallucinations.
## 3.6.1 Follow-Up Experiment: Example Inclusion In Prompt
We experimented with prompts that also include a human summary example. Unlike the previous few-shot learning experiment, we do not adjust the parameters of ChatGPT, but simply let the model reason over the contents of the prompt. We observe (see Appendix Table 7) that when guided by a human example from the dataset, the summaries generated by ChatGPT indeed tend to be more aligned with human performance, particularly on the Flesch Reading Ease metric (49.23 for layman, 28.88 for expert summaries). However, no significant changes are detected in other metrics. The degree of control over the summarization style has increased, yet it remains inferior to human capabilities.
## 3.6.2 Follow-Up Experiment: Repeated Re-Writing
Summaries are further re-written based on the prompt Please give me a layman/**expert** style version of X, where X was the previously generated summary. Figure 2 and Table 3 display the performance of ChatGPT after re-writing in the entries "ChatGPT L2E2L" and "ChatGPT E2L2E"
which stand for the order in which instructions were given (L stands for layman, and E for expert).
The examinations point out that misinformation and hallucinations may be further increased during subsequent rewriting (lower SummaC scores, lower values in the named entity hallucination metric).
## 4 Study On Text Formality Transfer

## 4.1 Prompt Formulation And Experimental Setup
Our subsequent set of experiments investigates ChatGPT's capacity for style transfer concerning language formality. Our prompt for this task was formulated as Please give me a **formal** / an **informal** *version* of X. We utilized the same experimental setup as for the summarization task; however, we restricted the maximum number of generated tokens to 32. We again experimented with various prompts, as shown in Table 4 below. Unless otherwise specified, all experiments used the same configuration.
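For reference, the corresponding style-transfer query could be issued as in the short sketch below, again assuming the 2023-era openai package and the configuration described above; the helper name is our own, and the retry loop from the summarization sketch is omitted for brevity.

```python
import openai

def transfer_style(sentence: str, style: str = "formal") -> str:
    """Formality-transfer query from Section 4.1; style is "formal" or "informal"."""
    article = "an" if style == "informal" else "a"
    prompt = f"Please give me {article} {style} version of {sentence}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        top_p=1,
        frequency_penalty=0.2,
        presence_penalty=0.2,
        max_tokens=32,  # the only change from the summarization setup
    )
    return response["choices"][0]["message"]["content"].strip()
```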
## 4.2 Dataset
We investigated whether ChatGPT can proficiently execute style transfer on sentences using data from the GYAFC (Rao and Tetreault, 2018) dataset. The dataset has two branches, Entertainment & Music
(EM) and Family & Relationships (FR). With the aid of this dataset, we aim to evaluate ChatGPT's ability for sentence style transfer, examine the differences in vocabulary selection and syntactic structures between ChatGPT and human performance, and identify the limitations of ChatGPT.
## 4.3 Metrics
To evaluate the level of formality in the generated text, we utilized Text Formality Score (Heylighen and Dewaele, 1999) and MTLD Lexical Diversity
(McCarthy and Jarvis, 2010) metric. The Text Formality Score (Heylighen and Dewaele, 1999) is a metric that quantifies the degree of formality in language usage within a text, based on the adherence to formal linguistic norms. Another measure that evaluates language formality is the MTLD Lexical Diversity metric (McCarthy and Jarvis, 2010).
This index measures the diversity and richness of the vocabulary used in the text, based on the frequency and number of unique words. A higher MTLD score indicates a greater variety of vocabulary, which typically corresponds to a more formal language style. We also utilized the BLEU (Papineni et al., 2002) score to draw a comparison between ChatGPT and the SOTA approach. We additionally assessed the distribution of POS tags in the generated texts of the different styles, as well as the distribution of dependency labels5. For quantifying misinformation and hallucinations, we used DAE and named entity hallucination checking. The DAE algorithm (Goyal and Durrett, 2020) utilizes dependency arcs to identify entailment relationships between propositions and identify inconsistencies in factual information based on syntactic and semantic structures.
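To make these measures concrete, the sketch below approximates the Heylighen and Dewaele (1999) F-score from spaCy POS counts (with determiners standing in for articles, and AUX counted among verbs, both our simplifications) and computes MTLD via the lexicalrichness package; it is an illustrative approximation, not the exact implementation behind the reported scores.

```python
import spacy
from lexicalrichness import LexicalRichness

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def formality_f_score(text: str) -> float:
    """Approximate F-score: higher values indicate more formal language."""
    tokens = [t for t in nlp(text) if not (t.is_punct or t.is_space)]
    n = max(len(tokens), 1)

    def freq(*tags: str) -> float:
        return 100.0 * sum(t.pos_ in tags for t in tokens) / n

    return (freq("NOUN", "PROPN") + freq("ADJ") + freq("ADP") + freq("DET")
            - freq("PRON") - freq("VERB", "AUX") - freq("ADV") - freq("INTJ")
            + 100.0) / 2.0

def mtld_score(text: str) -> float:
    """MTLD lexical diversity (McCarthy and Jarvis, 2010)."""
    return LexicalRichness(text).mtld(threshold=0.72)
```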
## 4.4 Results On Formality Control

## 4.4.1 Effect Of Prompt Formulation
Table 4 presents the results for a set of 500 random samples from the GYAFC dataset. We observe that the Formality scores are very similar for ChatGPT formal vs. informal texts. We note however that the difference in ratings for human-written texts is also small for this metric. The MTLD metric on the other hand shows higher values for ChatGPT-generated formal texts; in fact, the scores are substantially larger than those of human-written texts, but differ not much from each other. We therefore proceed with the prompts using the formulation formal/informal for the rest of the experiments on the whole dataset.

| Prompt version | Formality | MTLD |
|-----------------------------|-----------|--------|
| informal | 51.09 | 13.22† |
| unprofessional | 51.20 | 16.23† |
| spoken version | 51.30† | 14.47† |
| easygoing | 51.43† | 14.11† |
| casual | 51.00 | 16.30† |
| laid-back | 51.27 | 13.94† |
| human answer (for informal) | 50.76 | 11.42 |
| formal | 52.22† | 31.23† |
| professional | 51.96† | 31.98† |
| written | 51.62† | 29.69† |
| stately | 51.30† | 34.43† |
| grandiose | 52.85† | 30.71† |
| majestic | 52.23† | 33.49† |
| human answer (for formal) | 53.92 | 14.99 |

Table 4: Text formality on different prompts, tested on a set of 500 randomly selected items from the dataset. † indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.

5https://spacy.io/

## 4.4.2 Sentence Formality Control

Table 5 offers supplementary evidence from the full dataset supporting ChatGPT's capacity to modify the formality level of sentences. By employing the Formality indicator (Heylighen and Dewaele, 1999), it is apparent that the generated text tends to manifest a higher level of formality overall. A primary factor contributing to this result is the predisposition of ChatGPT's training corpus towards written sources, encompassing materials such as books and news articles, as opposed to spoken language corpora (OpenAI, 2023). This perspective is further corroborated by an examination of the generated sentence samples. The MTLD metric underscores that ChatGPT's lexical diversity is considerably lower when generating informal sentences, but shows a marked increase when generating formal sentences.
| Dataset | Candidate | Formality | MTLD |
|----------|------------------|-----------|---------|
| GYAFC-FR | Human Informal | 49.87 | 15.20 |
| GYAFC-FR | Human Formal | 53.57 | 18.70 |
| GYAFC-FR | ChatGPT Informal | 50.77†‡ | 14.60‡ |
| GYAFC-FR | ChatGPT Formal | 52.06†‡ | 31.68†‡ |
| GYAFC-EM | Human Informal | 50.11 | 12.11 |
| GYAFC-EM | Human Formal | 53.76 | 15.82 |
| GYAFC-EM | ChatGPT Informal | 51.02†‡ | 12.01‡ |
| GYAFC-EM | ChatGPT Formal | 51.98†‡ | 29.80†‡ |
Table 5: Text formality scores by automatic metrics; †
and ‡indicate statistical significance (p<0.05) against same-style human answers, and opposite-style ChatGPT
answers via paired t-test, respectively.
## 4.4.3 Comparison To Previous SOTA Model
We also find that ChatGPT outperforms the previous supervised SOTA model (Nouri, 2022) by training on the same subset as in Section 4.4.1 for few-shot learning, as evident from the higher BLEU score. Specifically, ChatGPT yields superior scores of 0.711 and 0.697 in the EM and FR branches, as compared to the SOTA model's scores of 0.671 and 0.652. However, ChatGPT achieved only 0.07 and 0.06 BLEU scores on the EM and FR branches, respectively, in the unsupervised setting.
## 4.4.4 Effect Of Example Inclusion In Prompt
We again examined the impact of including an example of the dataset into the prompt and find that this again helps ChatGPT slightly with matching the dataset style (with details provided in Table 8).
Specifically, the formality score for the informal style is 50.67, while it climbs to 52.13 for the formal style, with the MTLD score also displaying an increase from 14.81 for informal texts to 19.22 for formal texts.
## 4.4.5 Disparities In Style Transfer Behavior
In terms of controlling the formality of sentence style, ChatGPT's performance still exhibits significant differences compared to human behavior.
While the by-item correlation is slightly higher for this dataset than for the summary task (Pearson correlation of around 0.4 for formal style and 0.5 for informal style on the Formality metric; 0.3 for the MTLD measure), there are interesting disparities in the distributions of POS tags between ChatGPT and humans. The examination of statistical significance further substantiates our antecedent observation, indicating a substantial disparity between the different response styles engendered by the model, as well as between ChatGPT's answers and same-style human answers.
Figure 3 illustrates the absolute differences in the distribution of Part-of-Speech (POS) tags. Based on this figure, it is evident that ChatGPT employs a higher frequency of adjectives, adpositions, determiners, and nouns in the generation of formal sentences when compared to those produced by human writers. Conversely, in the generation of informal sentences, ChatGPT tends to utilize more auxiliary words and punctuation marks. These variances in word choice between formal and informal styles, as exemplified by ChatGPT, are indicative of differences in its selected vocabulary for distinct stylistic modes compared with humans.
By analyzing the distribution of dependency labels (Appendix Figures 5, 6, 7, 8), it is also clear that, in comparison to human-authored sentences, ChatGPT utilizes a greater frequency of adjectival modifiers, auxiliaries, determiners, objects of the preposition, and prepositional modifiers for formal sentences. Contrarily, compounds and dependents are infrequently employed in the generation of informal sentences by ChatGPT.
![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 3: Absolute differences in POS tag distributions between ChatGPT-generated and human-written answers for the informal (top) and formal (bottom) styles.
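The per-tag differences plotted in Figure 3 can be reproduced in essence with a few lines of spaCy; the sketch below is our own illustration, not the paper's analysis code, and computes relative POS-tag distributions so that ChatGPT and human outputs can be subtracted tag by tag.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def pos_distribution(sentences: list[str]) -> dict[str, float]:
    """Relative frequency of coarse POS tags over a list of sentences."""
    counts = Counter(tok.pos_ for sent in sentences for tok in nlp(sent))
    total = sum(counts.values()) or 1
    return {tag: c / total for tag, c in counts.items()}

def pos_difference(system: list[str], human: list[str]) -> dict[str, float]:
    """Per-tag difference (system minus human), as visualized in Figure 3."""
    sys_d, hum_d = pos_distribution(system), pos_distribution(human)
    return {tag: sys_d.get(tag, 0.0) - hum_d.get(tag, 0.0)
            for tag in set(sys_d) | set(hum_d)}
```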
## 4.4.6 Inconsistencies And Hallucinations
In order to assess the risk of introducing erroneous information when ChatGPT performs sentence style transformation, we employed DAE (Goyal and Durrett, 2020) at the sentence level to examine the factuality after text style transformation, and compared again the effect of multiple re-writes. Similar to before, F denotes formal style, I signifies informal style, and X2X2X (X ∈ {F, I}) represents multiple rewriting transformations of the text. The outcomes of our inquiry are depicted in Figure 4 and Appendix Figure 14. We also again scrutinized the potential incorporation of hallucinatory information regarding named entities in the ChatGPT-generated text, and the findings are presented in Appendix Table 9.
![7_image_2.png](7_image_2.png)
Upon conducting factuality checking (see Figure 4, and Appendix Figure 14), it is discovered that ChatGPT's performance is inferior to that of humans in sentence-style rewriting. Interestingly, with the increase in the number of text conversions and rewritings, ChatGPT's tendency to commit factual errors escalates while the output increasingly deviates from the original text, compromising the fidelity of the final result. In a particular instance, the human-generated formal expression states "She is a poor vocalist", whereas the formal rendition provided by ChatGPT articulates "She does not possess the ability to sing". This discrepancy represents a significant semantic alteration. The degree of dependency arc entailment is low in this case. Similarly, Appendix Table 9 reveals that recall scores on the named entity hallucination metric are lower in ChatGPT sentences than in human sentences.
## 4.4.7 Qualitative Examples
To explore whether ChatGPT-generated sentences significantly alter the original semantics of the input text, we conducted a case study by randomly selecting 15 samples from each branch of the GYAFC
dataset. Our findings indicate that ChatGPT poses a relatively severe risk of modifying the original semantics during sentence style transformation, with approximately 18% of the samples exhibiting noticeable semantic inconsistencies. The examples in Table 6 reveal that during the process of sentence style transfer, ChatGPT erroneously modifies the content words, resulting in significant semantic alterations.
| Formal to Informal | |
|---------------------------------------------------|-------------|
| It is such a waste of TV space. | (Reference) |
| Yes, because it's such a waste of TV space! | (Human) |
| What a total waste of TV time! | (ChatGPT) |
| The other boy isn't that great. | (Reference) |
| The other boy is not that good. | (Human) |
| The other kid's not so hot. | (ChatGPT) |
| I really enjoy how the composition has the tec... | (Reference) |
| I really like how they do like the whole techn... | (Human) |
| I'm diggin' how the techno beat slows down in ... | (ChatGPT) |
| Informal to Formal | |
| Fatboy Slim - Right Here, Right Now Or any oth... | (Reference) |
| Fatboy Slim is right here and now. He Rocks! | (Human) |
| Fatboy Slim's "Right Here, Right Now" is an ex... | (ChatGPT) |
| loved them since their first album. | (Reference) |
| I have loved them since their first album. | (Human) |
| I have held a fondness for them since the rele... | (ChatGPT) |
| if u occasionally doing it then u alrady r add... | (Reference) |
| If you occasionally do it, then you are already... | (Human) |
| If you are engaging in the activity on a regul... | (ChatGPT) |

Table 6: Case study of ChatGPT generated output
Furthermore, our examination of the visualized dependency tree (see Appendix Figures 11, 12, and 13), which relies primarily on the dependency arc entailment (DAE) algorithm for fact-checking, reveals that the text generated by ChatGPT contains a higher number of dependency arcs lacking support from the original text, when compared to human responses.
## 5 Conclusion
This paper presents a broad assessment of ChatGPT's proficiency in generating controllable text.
We conducted quantitative and qualitative examinations at the document level (summarization task)
and sentence level (text style transfer). The empirical findings show that ChatGPT outperforms the previous state-of-the-art models on automatic metrics, but that there are substantial disparities between its generated texts and human-written texts.
These disparities are reduced by providing a target example of the human writing style. Furthermore, our investigations also confirm the previously reported problems of hallucinations and inaccuracies in text generated by ChatGPT.
## 6 Limitations
The primary limitations of the current study pertain to the selection of prompts and evaluation metrics.
The experimental cost of requesting API responses from OpenAI to assess ChatGPT's text generation abilities imposes significant constraints on our choice of datasets. Therefore, we had to limit our experimentation to only two related controllable text generation datasets. While we have evaluated ChatGPT's performance at both the document and sentence levels, we cannot extrapolate that ChatGPT has similar performance for other text generation datasets. Additionally, the experimental cost prohibits us from conducting an exhaustive search over hyperparameter settings. We relied on the default configuration recommended by OpenAI,
and we maintain consistency in all hyperparameters to ensure the fairness of the experiments.
Secondly, although we have studied the impact of prompt engineering on ChatGPT, the selection of prompts is mainly affected by human understanding, and the number of potential prompts is infinite.
Hence, we cannot guarantee whether other prompts that we did not select will yield the same conclusions as our experiment. Furthermore, ChatGPT is subject to continuous updates and iterations, which may lead to improved performance, making it difficult to predict if future versions of ChatGPT will have similar results to our experiments.
Finally, to select appropriate evaluation metrics, we have included both domain-related evaluation metrics (such as reading difficulty and text formality) and domain-independent evaluation indicators
(such as fact-checking and hallucination detection).
However, we acknowledge that the automatic metrics may sometimes not capture all aspects of the intended construct correctly.
## 7 Ethics Considerations
All datasets utilized in this study are publicly available, and we have adhered to ethical considerations by not introducing any additional information into ChatGPT's inputs.
## Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 948878).
## A Appendix: One-Shot Guidance
## B Appendix: Absolute Differences in POS and Dependency Label Distributions
| Candidate | FRE | CLI | DCR |
|-----------|-----|-----|-----|
| Document: {Original Document}, Layman Summary: {Human Layman Summary}. Please learn the way of summarization from the previous example, and give me a layman-style summary of X | 49.23† | 13.26† | 10.45† |
| Human Answer | 52.42 | 12.46 | 8.93 |
| Document: {Original Document}, Expert Summary: {Human Expert Summary}. Please learn the way of summarization from the previous example, and give me an expert-style summary of X | 28.88† | 15.92† | 11.82 |
| Human Answer | 23.20 | 17.62 | 11.78 |
Table 7: Reading difficulty of one-shot guidance. †indicates statistical significance (p<0.05) against corresponding human answers via paired t-test.
| Candidate | Formality | MTLD |
|-----------|-----------|------|
| Formal: {Formal Sentence}, Informal: {Informal Sentence}. Please learn the way of formality conversion from the previous example, and give me an informal version of X | 50.67† | 14.81 |
| Human Answer | 49.87 | 15.20 |
| Informal: {Informal Sentence}, Formal: {Formal Sentence}. Please learn the way of formality conversion from the previous example, and give me a formal version of X | 52.13† | 19.22 |
| Human Answer | 53.57 | 18.70 |
Table 8: Text formality of one-shot guidance on GYAFC-FR branch. †indicates statistical significance (p<0.05)
against corresponding human answers via paired t-test.
![13_image_0.png](13_image_0.png)
Figure 5: Absolute differences in dependency labels distribution of ChatGPT and human-generated formal style sentences: GYAFC - EM
## C **Appendix: Dependency Arc Entailment**
| Dataset | Candidate | Precision | Recall | F1 |
|----------|------------------|-----------|--------|-------|
| GYAFC-FR | Human Informal | 0.989 | 0.988 | 0.988 |
| GYAFC-FR | Human Formal | 0.988 | 0.989 | 0.988 |
| GYAFC-FR | ChatGPT Informal | 0.986 | 0.985 | 0.986 |
| GYAFC-FR | ChatGPT Formal | 0.974 | 0.974 | 0.974 |
| GYAFC-FR | ChatGPT I2F2I | 0.982 | 0.982 | 0.982 |
| GYAFC-FR | ChatGPT F2I2F | 0.974 | 0.973 | 0.973 |
| GYAFC-EM | Human Informal | 0.979 | 0.987 | 0.983 |
| GYAFC-EM | Human Formal | 0.977 | 0.989 | 0.982 |
| GYAFC-EM | ChatGPT Informal | 0.975 | 0.974 | 0.974 |
| GYAFC-EM | ChatGPT Formal | 0.950 | 0.952 | 0.951 |
| GYAFC-EM | ChatGPT I2F2I | 0.970 | 0.969 | 0.970 |
| GYAFC-EM | ChatGPT F2I2F | 0.945 | 0.946 | 0.945 |

Table 9: Named entity hallucination - GYAFC
jia-2023-multi | Multi-Dialectal Representation Learning of Sinitic Phonology | https://aclanthology.org/2023.acl-srw.2 | Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-language systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations' potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features. | # Multi-Dialectal Representation Learning Of Sinitic Phonology
Zhibai Jia No.2 High School of East China Normal University [email protected]
## Abstract
Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-language systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects.
Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations' potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features.
## 1 Introduction
The evolution of languages in the Sinitic family created intricate correspondences and divergences in its dense dialect clusters. Investigating the dynamics of this evolution, through comparison and proto-language reconstruction, is an essential task to Sinitic Historical phonology. However, it may be costly for researchers to manually probe through the groups in search of phonological hints. Hence, it is desirable to accelerate the process with modern algorithms, specifically, representation learning.
Graph-based machine learning (Makarov et al., 2021) has gained increasing attention in recent years, due to its versatility with data with flexible structures. Especially, missing link prediction algorithms for knowledge graphs (Wang et al., 2021) (Zhu et al., 2022) can uncover a latent structure in noisy and incomplete knowledge. For learning phonological representations, graph-based learning can allow for more comprehensive integration of multi-dialectal evidence. Thus, we propose applying graph-based techniques for multi-dialectal representation learning. We construct a knowledge graph from the multi-dialectal phonological data, by abstracting unique phonetic components and individual characters into two kinds of nodes. Then, we connect them with edges specific to the dialect type wherein the character is associated with the given component. On the constructed knowledge graph, we train the BoxE algorithm (Abboud et al., 2020), a Box Embedding Model for knowledge base completion. Finally, we evaluate the obtained representations with unsupervised and supervised clustering, as well as MLP probes based on Middle-Chinese-derived labels, to show this tool's value for Sinitic phonological investigation.
## 2 Background On Sinitic Languages
The analysis of Sinitic languages faces a few specific challenges due to unique phonological characteristics. These characteristics define crucial details of our design.
In Sinitic languages, morphemes are primarily monosyllabic. Hence, Chinese writing binds one syllable to each of its glyphs, known as characters.
A syllable in Sinitic can be decomposed into an initial, a final and a tone. (Shen, 2020) Initials refer to the consonant-like sounds at the beginning of a syllable, which include both stops (e.g. /p-/, /b-/)
and fricatives (e.g. /s-/, /S-/). These initials could be combined with various finals to form syllables.
Finals refer to the vowel-like sounds at the end of a syllable, which included both simple vowels (e.g. /- a/, /-i/, /-u/), complex vowels (e.g. /-ai/, /-ao/, /-ei/), and vowels combined with consonant codas (/-m/,/-
n/,/-N/,/-p/,/-t/,/-k/). Tones refer to the pitch patterns 19 associated with syllables in Chinese. Tones could distinguish between words that were otherwise homophonous, and they were an important part of the Chinese phonological system.
Due to the early conception of the Chinese writing system, syllables from different Sinitic languages can usually be aligned to each other through a written form. As this alignment is typically implemented in databases of raw Sinitic data, the difficulty of cognate identification is drastically reduced, facilitating analysis. However, the simple syllable structure introduces large amounts of homophones, words sharing same pronunciations, into Sinitic languages. This hinders the use of the comparative method in reconstructing a Sinitic proto-language. The existence of a supersegmental tone feature also complicates a historical analysis of Sinitic languages.
Figure 1: Highlighting key characteristics of Sinitic relevant to our approach. Characters are the central identity in the multi-dialectal representations. The orthographic alignment of sub-syllable components forms the structure of data used in this study.
Two factors that motivate the use of a graph-based method include the uniform structure of Sinitic syllables and their intimate relationship with characters. The intuitive syllable decomposition and the glyph-based alignment inspire viewing the components contextualized in various dialects as different "observations" of a single character. Theoretically, these observations are derivable from the reading of the character in the proto-language.
## 3 Related Work
The practice of computationally-aided proto-language construction, often associated with cognate identification, has been extensively considered in the past two decades (Nerbonne et al., 2007). Examples include (Steiner et al., 2011), which draws insights from bio-informatics and the classical comparative workflow, and (List et al., 2017), which compared many methods for cognate identification. A relevant insight from the latter paper is that language-specific methods often outperform language-general ones, especially for languages like Sinitic. An epitome of neural methods for proto-language reconstruction would be (Meloni et al., 2021), in which Latin is reconstructed from Romance descendant languages with an encoder-decoder structure. However, our approach differs from their study in many crucial aspects. In Meloni et al. 2021, the reconstruction is supervised, with the proto-language Latin provided at training time. But our method targets not only documented proto-languages like Middle Chinese, but also unknown, intermediate varieties in the development from ancient Sinitic to modern dialects, which requires an unsupervised approach. Additionally, in terms of techniques, their use of GRU- and attention-based transducers contrasts with our emphasis on a graph-based method.
Considering the representation learning of Sinitic, we found abundant literature on the topics of speech recognition (Ma et al., 2022), segmentation and synthesis, which often yield representations of certain phonological relevance as a by-product. However, these studies focus heavily on a few major languages, specifically Mandarin or Cantonese, and, since they rarely claim motivation from historical phonology, seldom take a multi-lingual or multi-dialectal approach.
While speech representation learning often serves the aforementioned purposes, proposals that use neural networks to model phonetics and phonology from either symbolic abstractions or acoustic data, in order to examine theories in these fields, are relevant to this study. Unsupervised binary stochastic autoencoders were explored in (Shain and Elsner, 2019). GANs (Generative Adversarial Networks) were used in (Begus, 2020). These proposals modeled perception and categorization, in relation to language acquisition. Most interestingly, representation learning has been applied for discovering phonemic tone contours in tonal languages (Li et al., 2020), of which a great portion are Sinitic languages. However, these proposals again rarely address issues from historical phonology.
Finally, it should be noted that the concept of transforming porous data from a regular, matrix-like form to a loose, graph-like form for flexibility in processing, while essential to the designs of this paper, is not novel in the literature. Rather, it originates with the GRAPE framework in (You et al.,
2020). Notably, when the data in question concerns Chinese historical phonology, it coincides with Johann-Mattis List's proposals for introducing network methods into computational linguistics and Chinese historical phonology. Generally, this line of work should be considered most relevant to our study (List, 2018; List et al., 2014; List, 2015).
List (2018) approaches issues spanning character formation, Middle Chinese annotation, as well as Old Chinese reconstruction with network methods.
List et al. (2014); List (2015) examine dialect evolution with display graphs, with a focus on the complex word-borrowing dynamics between the dialect families. He calls for colleagues to lend more attention to data-driven, quantitative methods. Our proposal answers List's call by bringing together knowledge graphs with Chinese historical phonology. Furthermore, the utilization of SOTA
representation learning extends beyond the scope of the aforementioned work.
## 4 Method
The graph-based method for representing dialect data has the benefit of making the model more flexible, robust, and efficient at using porous, incomplete data. This is particularly important since investigations into dialects are often uncoordinated, resulting in a large number of partial character entries, where only some columns have pronunciations while others are missing. It could be argued that we can use missing data imputation to alleviate the issue, and continue processing the dialect data in a matrix form, perhaps with feed-forward neural networks or denoising autoencoders (Vincent et al.,
2008). However, traditional missing-data imputation techniques may create fictitious syllables that violate the phonotactics of that dialect when imputing initials or finals according to the mode of a type.
Conditioning the initials or finals on each other will cause higher-order dependencies that are hard to solve. Therefore, by keeping the spaces untouched and using paired comparisons, the graph formalism circumvents the problem. This formulation may also allow for auxiliary input features, such as basic phonological knowledge about the nature of phonemic contrast, to be injected into the model. On this graph, we learn the embeddings with the BoxE algorithm, to be discussed below.
## 4.1 Construction Of A Multi-Dialectal Knowledge Graph
Additionally, the method will be more flexible, allowing for auxiliary input features to be injected.
We construct a graph by leveraging the characters, as well as individual initials, finals and tones from various dialects, as nodes (see Figure 2). For instance, the fact of character C having an initial I in dialect D is modeled with an edge from C to I. The edge has a type specific to the dialect D and the category of the component, which is an initial. This edge type can be denoted as "D-initial". As demonstrated in Fig. 2, C could be character No. 1, where I is /t/ and the edge is "Changsha_initial".
After constructing the graph, character-level and component-level representations are trained simultaneously. The knowledge graph algorithm attempts to model the node features as well as a prediction function so that, when given a character node and a type of link, the corresponding pronunciation node can be predicted with maximum likelihood. In this process, the model implicitly generates hypotheses about character pronunciations missing or unseen in training, as well as historical relationships between the syllables.
If there are $M$ characters with readings from $N$ dialects involved in an experiment, the upper bound for the number of edge types will be $3N$. Assuming that $F_1 + F_2 + F_3$ unique initials, finals and tones could be found within the aggregated phonological systems of the $N$ dialects, the upper bound for the number of nodes is $M + F_1 + F_2 + F_3$.
The graph size scales sub-linearly with the number of dialects, since as more dialects are considered, their phonemic inventories will start to overlap and exhaust.
Following convention in knowledge base research, the graph is presented as triples in Head-Relation-Tail format.
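To make this concrete, the sketch below flattens a character-by-dialect table into such triples. The file name and the "Dialect_component" column naming scheme are hypothetical placeholders for however the underlying database is laid out; missing survey entries simply yield no edge.

```python
import pandas as pd

# Hypothetical layout: one row per character, columns such as "Changsha_initial",
# "Changsha_final", "Changsha_tone", "Shuangfeng_initial", ...
readings = pd.read_csv("xiang_readings.csv", index_col="char_id")

dialects = ["Changsha", "Shuangfeng", "Guanyang", "Quanzhou"]
components = ["initial", "final", "tone"]

triples = []
for char_id, row in readings.iterrows():
    for dialect in dialects:
        for comp in components:
            value = row.get(f"{dialect}_{comp}")
            if pd.isna(value):                      # porous data: skip, do not impute
                continue
            head = f"char_{char_id}"                # character node
            relation = f"{dialect}_{comp}"          # edge type, e.g. "Changsha_initial"
            tail = f"{comp}:{value}"                # component node shared across dialects
            triples.append((head, relation, tail))
```

Because component nodes are keyed only by their category and IPA value, identical components from different dialects collapse into a single node, which is what keeps the node count bounded by $M + F_1 + F_2 + F_3$.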
## 4.2 The Box Embedding Model
In pilot tests, we considered various algorithms from the field of graph representation learning and knowledge base completion for application. In the process, it was revealed that few algorithms are inherently suitable, as there are many subtle requirements in this context:
1. Models designed for knowledge graphs are more suited to this application than general graph learning algorithms, since the graph to be processed is heterogeneous, besides carrying edge type as information.
2. The model must have strong capacity for modeling multiple unique relations between the same two nodes. It is very common for one character to have the same initial across different dialects. This rules out many translation-based models that, when given different relations, always predict different tail nodes.
Prominent examples of such models include TransE (Bordes et al., 2013) and RotatE (Sun et al., 2019).
3. If the model uses inverse triples as an augmentation technique, then the model should also be expressive in many-to-one and one-to-many relations, because one initial or final will be mapped to numerous characters.
4. Of the applicable algorithms, interpretability should be prioritized, since we hope to extract interpretable phonological knowledge from the obtained representations. This casts doubt
on another large family of knowledge graph models, namely the bi-linear models, epitomized by RESCAL (Nickel et al.) and DistMult (Yang et al., 2015).
After consideration, we chose BoxE for its expressiveness and tolerance to many-to-one relationships, due to its Box embedding designs. Empirically, we also demonstrate that the BoxE is relatively optimal for the phonological task through comparison with RotatE (Sun et al., 2019) and ComplEx (Trouillon et al., 2016) in Table 4.
Here is a brief description of the BoxE algorithm.
It is a translational model that embeds each node with two vectors: a position vector $e_i$ and a translational bump $b_i \in \mathbb{R}^d$. These vectors are obtained after incorporating triples into the model. Additionally, each edge type is defined with two hyper-rectangles $r^{(1)}, r^{(2)} \subset \mathbb{R}^d$. To satisfy the relation $R$ between entities $E_1$ and $E_2$, we require $e_1 + b_2 \in r^{(1)}$ and $e_2 + b_1 \in r^{(2)}$. Intuitively, this means that $E_1$ and $E_2$ "bump" each other in the hyperspace $\mathbb{R}^d$ by some distance. If the new vectors fall within the bounds of the associated boxes, then the proposition is considered probable. To facilitate gradient descent, the boxes have relaxed borders. It is worth noting that BoxE is also capable of hyper-graph learning, as it accepts higher-arity relations as input, though we did not exploit this feature for this study.
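As an illustration of this membership condition only, the sketch below checks the hard (un-relaxed) version for a single pair of entities; the actual model replaces this hard check with a distance-based score so that gradients exist outside the boxes, and all vectors here are stand-in NumPy arrays rather than trained parameters.

```python
import numpy as np

def boxe_condition_holds(e1, b1, e2, b2, box1_low, box1_high, box2_low, box2_high):
    """Hard version of the BoxE condition for one relation instance.

    e1, e2 are position vectors; b1, b2 are translational bumps; the box*_low/high
    arrays are the corners of the relation's hyper-rectangles r(1) and r(2).
    """
    p1 = e1 + b2                          # entity 1, bumped by entity 2
    p2 = e2 + b1                          # entity 2, bumped by entity 1
    return bool(np.all((box1_low <= p1) & (p1 <= box1_high)) and
                np.all((box2_low <= p2) & (p2 <= box2_high)))
```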
Our training objective was to maximize the score or probability of given relations. To elaborate, this means maximizing the chance of predicting the masked initials/finals/tones of some character in some dialect with the unmasked components associated with that character, from both within and without the dialect. This is analogous to the comparative method in Historical Phonology, as the model implicitly reconstructs a latent "proto-language", from which the descendant languages can be deduced (or, "decoded") with maximum likelihood.
## 5 Data And Experimental Setup
We use pronunciation data from four varieties of Xiang Chinese: Changsha 長沙, Shuangfeng 雙峰, Guanyang Wenshi 灌陽文市, and Quanzhou Xiancheng 全州縣城, spoken primarily in Hunan Province, provided by CCR (Huang et al., 2011) and retrieved with the Comparative Analysis Toolset for Chinese Dialects (Huang, 2021). We also obtain labels of Middle Chinese readings from the same source. In this work, Middle Chinese refers to the phonological system recorded in the dictionary Qieyun, from the year 601 AD. It was expanded in the Song Dynasty into the dictionary Guangyun, from which this study draws data. Middle Chinese is literary and may not reflect the colloquial speech of China in any time or place. However, most phonological systems of modern Sinitic languages (with the notable exception of the Min languages) can be derived from the Qieyun system.
Thus we treat it as a useful proto-language model for most Sinitic languages.
We operate on symbolic abstractions instead of raw acoustic data, as all the data have been transcribed into IPA in the database. One row of data corresponds to readings of one Chinese character.
Internally, each character is mapped to a unique identifier, which is the character's serial number in Guangyun. For every variety of Chinese, there are four columns, corresponding to the initial value, final value, tonal value and tonal type of a given character's pronunciation. The tone type column is actually redundant, and it is assigned manually by investigators. In each dialect, there is a one-to-one correspondence between one tone value and one tone type. Between two dialects, tones arising from the same Middle Chinese tone are given the same names. Hence, the tone type feature introduces prior expert knowledge about the historical origin of tones. However, we expect the model to derive the historical tones without any diachronic expert knowledge. Hence, we discard the tone type feature, and use only the three values for this study.
## 5.1 Processing Of Duplicate Data
Characters in Sinitic can be polyphonic, that is, sometimes a character will be mapped to multiple readings in one dialect. This results in duplicate data in the dataset. For convenience, we drop the extra pronunciations and keep only the first line for every entry. However, there can be ambiguity surrounding the correspondence of readings for polyphonic characters. For instance, the first reading entry for a polyphonic character in dialect A
might be cognate with the second reading entry for the character in dialect B. However, our naïve approach will match all the first entries to each other. Additionally, two dialects may inherit only partial readings of a polyphonic character in the proto-language. Hence, this procedure potentially introduces erroneous alignment into the model.
## 5.2 Split Of Training, Testing And Validating Datasets
The model was not trained with all the data, so as to examine the robustness of the model. Instead, some triples are diverted to form testing and validating datasets. Unfortunately, assignment in this context is slightly more complicated than simple stochastic choice. There is the scenario where all initial (final/tonal) information about one character is diverted from training. In this case, the model will not be able to correctly embed this character. To circumvent this issue, we mandate that at least one feature from any of the three compositional types is retained in the training set for any character. In the four Xiangyu in this case, the result is empirically a split of 80.50%:12.52%:6.98%.
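The sketch below shows one way such a constrained split could be implemented. It reads the constraint as reserving, for every character, its first observed triple of each component type for training, with the remaining triples assigned stochastically; both this reading of the constraint and the split fractions are illustrative assumptions, not the exact procedure behind the 80.50%:12.52%:6.98% figures.

```python
import random
from collections import defaultdict

def constrained_split(triples, test_frac=0.125, valid_frac=0.07, seed=0):
    """Split (head, relation, tail) triples while keeping at least one observed
    initial/final/tone per character in the training set."""
    rng = random.Random(seed)
    by_char = defaultdict(list)
    for triple in triples:
        by_char[triple[0]].append(triple)

    train, rest = [], []
    for ts in by_char.values():
        rng.shuffle(ts)
        kept_types = set()
        for head, rel, tail in ts:
            comp_type = rel.split("_")[-1]          # "initial" / "final" / "tone"
            if comp_type not in kept_types:         # reserve the first of each type
                kept_types.add(comp_type)
                train.append((head, rel, tail))
            else:
                rest.append((head, rel, tail))

    rng.shuffle(rest)
    n_test = min(len(rest), int(test_frac * len(triples)))
    n_valid = min(len(rest) - n_test, int(valid_frac * len(triples)))
    train.extend(rest[n_test + n_valid:])
    return train, rest[n_test:n_test + n_valid], rest[:n_test]
```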
## 5.3 Data Statistics
The initial, final and tone counts for the four dialects are listed in Table 1. A total of 2805 characters is included, but not every character has the corresponding phonological data documented in every dialect. In the training set, there are 22300 entries.
## 5.4 Model Setup
For the parametric size of the model, see Table 2.
We employ the BoxE algorithm implemented in the Python library PyKeen (Ali et al., 2021b,a). We did not fine-tune the model or any model parameters, so as to demonstrate the capability of the model even in a highly suboptimal setting.
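For reference, a run with these settings can be set up with PyKeen's pipeline API along the lines of the sketch below. This is not the authors' script: the triple lists are assumed to come from the construction and split described above, and argument names may differ slightly between PyKeen versions.

```python
import numpy as np
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

train_tf = TriplesFactory.from_labeled_triples(np.array(train_triples, dtype=str))
valid_tf = TriplesFactory.from_labeled_triples(
    np.array(valid_triples, dtype=str),
    entity_to_id=train_tf.entity_to_id, relation_to_id=train_tf.relation_to_id)
test_tf = TriplesFactory.from_labeled_triples(
    np.array(test_triples, dtype=str),
    entity_to_id=train_tf.entity_to_id, relation_to_id=train_tf.relation_to_id)

result = pipeline(
    training=train_tf,
    validation=valid_tf,
    testing=test_tf,
    model="BoxE",
    model_kwargs=dict(embedding_dim=64),   # vector and box dimension from Table 2
    optimizer="Adam",
    training_kwargs=dict(num_epochs=2000),
    random_seed=0,
)
print(result.metric_results.to_df())       # hits@k and mean reciprocal rank
result.save_to_directory("boxe_xiang")     # hypothetical output directory
```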
| Dialect | Initials | Finals | Tones |
|------------|----------|---------|----|
| Changsha | 21 | 38 | 11 |
| Shuangfeng | 28 | 35 | 11 |
| Guanyang | 28 | 42 | 5 |
| Quanzhou | 26 | 43 | 4 |
Table 1: Data Statistics
| Parameter | Value |
|-------------------------------|---------|
| Vector and hyperbox dimension | 64 |
| Number of nodes | 2946 |
| Number of edge types | 12 |
| Cumulative parameter size | 378624 |
| Optimization algorithm | Adam |
| Number of epochs | 2000 |
Table 2: Model Parameters
## 6 Experimental Evaluation
## 6.1 Canonical Evaluation Of Model
The convergence of the model, and a preview of the spatial distribution of embeddings can be seen in Figure 3. The model quickly converges. The entity plot decomposed with PCA reveals a mass of character readings "ejecting" two groups of entities, respectively the combination of all initials and tones, and all finals, which is in accordance with the bi-partite and heterogeneous nature of this graph.
Canonically, BoxE is evaluated with the hit@n metric and MRR (mean reciprocal rank) for link prediction. On the validation set, our model achieved hit@1: 51.25%, hit@5: 87.19%, and hit@10: 93.76% on the "tail" batches. The head batches are not relevant, because they involve "predicting characters from initials/finals", which is a one-to-many prediction. In Table 4, we demonstrate empirically the superiority of the BoxE algorithm over other common knowledge graph algorithms on this phonological task. A clearer visualization of the embedded points can be seen in Figure 4. Because characters are ordered by rhyme in Guangyun, rhyming characters (having the same final) have similar coloring on the map. The coloring is only a reflection of the point's serial number in the dataset and does not have any quantitative interpretation.
Presumably, the translational bumps for characters will contain more information relevant to historical phonology, as they designate which component types to "bump into the box." Unless otherwise mentioned, all experiments are carried out on the bump embeddings and not the positions. However, empirically we find that the two kinds of embeddings are interchangeable.
## 6.2 Examining Contrastive Information
In this section, unsupervised clustering is used to evaluate contrastive information in the embeddings.
Based on the hypothesis that the phonological structures of the dialects are co-embedded in the latent structure of embeddings, we determined whether the high-dimensional embeddings retain information associated with the theoretic categories of the input dialects, a task similar to Tilsen et al. 2021. After applying a clustering algorithm to the embedded characters, the information yield 1 of the found categories against input categories of initials, finals and tones is computed. A higher information yield indicates that the clusters found by unsupervised clustering were more interpretable with respect to the input phonemic categories. 2 3 The clustering algorithms used for dissecting the cloud of embedded characters include HDBSCAN (McInnes and Healy, 2017; a density-based method), Affinity Propagation, K-means and Agglomerative Clustering.4 The results can be seen in Figure 5.
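A sketch of this evaluation loop is given below. The UMAP settings follow footnote 3; the HDBSCAN minimum cluster size, the `character_bumps` matrix (character bump embeddings extracted from the trained model) and the `readings` table are illustrative assumptions, and the rows of the two are assumed to be aligned.

```python
import hdbscan
import umap
from sklearn.metrics import mutual_info_score

X = character_bumps                                   # (n_characters, 64) bump embeddings
X_reduced = umap.UMAP(n_neighbors=50, n_components=8, random_state=0).fit_transform(X)
clusters = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(X_reduced)  # -1 = unclassified

for dialect in ["Changsha", "Shuangfeng", "Guanyang", "Quanzhou"]:
    for comp in ["initial", "final", "tone"]:
        labels = readings[f"{dialect}_{comp}"].fillna("missing")
        # information yield: H(labels) - H(labels | clusters) = empirical mutual information
        print(dialect, comp, round(mutual_info_score(labels, clusters), 3))
```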
Affinity propagation and HDBSCAN achieved the best results in finding interpretable clusters from the datasets. However, we find that HDBSCAN is very sensitive to its two parameters: its effect degrades when we allow for smaller clusters but demand greater confidence in the classification. Notably, HDBSCAN achieved an effect similar to affinity propagation with just 29 clusters, while the latter used 130.
The large information yields reflect that the unsupervised algorithms do tend to dissect the character set along latent lines corresponding to phonological opposition in the input dialects, as shown in a partial observation in Table 3. It appears that the distribution of finals in dialects had more influence on the latent structure than initials or tones. Simply put, the characters within each unsupervised cluster are more likely to rhyme than alliterate, though both cases occur in observation of the HDBSCAN clusters.

1Entropy subtracted by conditional entropy, or an empirical estimate of mutual information.

2HDBSCAN sometimes refuses to classify points it is not sure of. These points are combined into one category for the aforementioned purpose.

3Before using HDBSCAN, UMAP was first used to reduce the 64 embedding dimensions to 8 dimensions, with the neighbour parameter set to 50. This is an advised practice from the HDBSCAN documentation.

4The numerous methods were tried sequentially as we do not know which algorithm best recovers the latent structure of representations in accordance with theoretic categories.
There are limitations to this experiment though, which will be discussed below.
## 6.3 Inference Of Proto-Language Features
In this section, we investigate the quality of our embeddings with respect to proto-language reconstruction tasks, as an important potential application of this method lies with such work. Hence, we trained classifiers in an attempt to infer labels from Middle Chinese, which likely predates proto-Xiang and is therefore an accessible surrogate for that proto-language. The features to infer are Grades (等第), Voice (清濁), Tones (聲調), She (攝, a coarse division of finals), Initials (字母), and Mu (韻目, a fine division of finals).
Grades are believed to be associated with medials, a component in the front of the final (amalgamated with final in Xiangyu data). Voice is a division based on properties of the initial, in which voiced consonants, voiceless unaspirated consonants, voiceless aspirated consonants and nasal consonants are distinguished. For tones, in Middle Chinese, there were four: level, rising, departing, and entering. Of these categorical labels, there are respectively 4, 4, 4, 16, 36 and 206 unique classes.
For this experiment, a train-test split of 0.67/0.33 was used. Since phonological evolution is quite regular and systematic, we should expect decent results without a great proportion of the data used for training. Accuracies below are for the test set.
These values are consistently higher than a naïve baseline of guessing the mode of each distribution, proving that proto-language-related features were preserved in the retrieved embeddings. (See Table 5.)
The MLP generally outperforms Ridge Classification on inference for these characters, with the sole exception of tones, where RC outperforms MLP by 1.1%. The best results are attained for tones and voice, showing these features to be phonologically well preserved from Middle Chinese to Xiang languages.
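A sketch of these probes using scikit-learn is shown below; the classifier hyperparameters are library defaults (the paper does not specify them), and the `middle_chinese` label table with its column names is a hypothetical stand-in for the CCR-derived annotations.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = character_bumps                                   # embeddings of characters with MC labels
for feature in ["grade", "voice", "tone", "she", "initial", "mu"]:
    y = middle_chinese[feature].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
    mode = pd.Series(y_tr).mode()[0]                  # naive baseline: guess the most common class
    scores = {"baseline": float(np.mean(y_te == mode))}
    for name, clf in [("ridge", RidgeClassifier()), ("mlp", MLPClassifier(max_iter=1000))]:
        scores[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(feature, {k: round(v, 3) for k, v in scores.items()})
```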
Interesting observations can be drawn from the confusion matrices generated with such classification.

5Canonically so, but there are a few erroneous entries in the data we used, resulting in sometimes one or two extra categories containing a few characters. They were kept.
| ID | Changsha | Shuangfeng | Guanyang | Quanzhou |
|------|---------------|---------------|---------------|---------------|
| 0 | Initial:/m/ | Initial:/m/ | Initial:/m/ | Initial:/m/ |
| 1 | Initial:/ph / | Initial:/ph / | Initial:/ph / | Initial:/ph / |
| 2 | Final:/˜in/ | Final:/˜i/ | Final:/ i˜E/ | Final:/ ieN/ |
| 7 | Final:/(u)ei/ | Final:/ui/ | Final:/ uEi/ | Final:/uei/ |
Table 3: Analysis of Selected HDBSCAN Clusters. In these clusters, characters are predominantly, but not exclusively associated with the listed features.
| Alg. (Metric %) | Hit@1 | Hit@5 | Hit@10 |
|-----------------|-------|-------|--------|
| BoxE | 51.25 | 87.19 | 93.76 |
| RotatE | 33.11 | 57.47 | 66.18 |
| ComplEx | 9.40 | 24.65 | 35.37 |

Table 4: Link prediction performance (hits@k, %) on the validation set for BoxE compared with other knowledge graph embedding algorithms.
Presumably, these matrices can offer insight into what categories were blended and which oppositions were lost during the development of some language family. One such example is demonstrated in Figure 6. It could be seen that there is large confusion between the Xian 咸, Dang 宕 and Shan 山 Shes, and also between the Xie 蟹 and Zhi 止 Shes. This could indicate that in Proto-Xiang, there is confusion between these categories relative to Middle Chinese.
## 7 Discussions
Our current setting only operates on pre-abstracted symbols and lacks incorporation of acoustic or articulatory evidence. Incorporating multi-modal data into a knowledge graph framework could enhance the quality of embeddings and enable more accurate representations of phonological features.
Also, the proposed method uses shared embeddings for symbolic components across different dialects, which cannot fully capture dialect-specific variations. Investigating contextualized or dialect-specific component embeddings could improve the model's ability to capture finer-grained phonological distinctions. Finally, phonetically similar components are currently treated as independent items, which is too absolute an assumption. However, it is also possible for phonetic cues to override the correct phonological alignment in the model. In many cases, phonetic similarity does not imply diachronic homology. Two phonetically equivalent syllables from two different dialects may have different origins. Conversely, two phonetically distinct syllables from two different dialects may be cognate. The subtle balance between "phonetic" and "phonological" proximity requires further discussion.
Several lines of research may benefit from robust multi-dialectal representations. In dialectology, there is a need for estimating the divergence between phonological systems, including the divergences between their constituents, such as individual characters, phonemes and syllables. With multi-dialectal representations, this divergence can be estimated quantitatively. In historical phonology, the reconstruction of a proto-language demands deep scrutiny of dialect systems, a task whose efficiency can be improved by manipulating the representations. Also, they can be used for completion of phonological knowledge bases. Often, knowledge bases for Sinitic phonology are fragmented, due to imperfect surveys and heterogeneity of sources, etc.
The representations can be used to infer missing pronunciations in different dialects to improve the quality of observations.

| Algorithm (Acc %) | Grades | Voice | Tones | She | Initials | Mu |
|----------------------|--------|-------|-------|------|----------|------|
| Ridge Classification | 65.3 | 76.4 | 84.1 | 54.6 | 49.4 | 18.6 |
| MLP | 70.5 | 81.1 | 83.0 | 61.4 | 53.2 | 26.9 |
| Naïve Baseline | 48.4 | 35.4 | 35.6 | 15.3 | 8.1 | 1.8 |

Table 5: Test-set accuracy (%) of classifiers probing the embeddings for Middle Chinese labels, against a naïve baseline that always guesses the most frequent class.
The graph-based method proposed in this paper benefits from phonological characteristics specific to Sinitic languages, but is also limited by these characteristics. Specifically, the process of constructing a phonological graph from words, as proposed in this study, is less natural in languages where words typically have many syllables, and vary in the number of syllables contained. In these languages, the temporal interaction of syllables within a word is a new phenomenon that the graph-based method needs to adapt to. Additionally, in these languages, it will be less straightforward to tokenize the words into expressive sub-words to use as nodes in the graph. Presumably, among non-Sinitic languages, the proposed method will be most performant in other languages of the Southeast Asian Sprachbund, such as those in the Hmong-Mien or Austroasiatic families. These languages share phonological features with Sinitic languages that enable our method. On the other hand, this method will likely meet more complications outside of the local sprachbund.
## 8 Conclusion
This paper demonstrated the potential of graph-based representation learning in Chinese Historical Phonology. The representations are potent in many ways, e.g., facilitating the reconstruction of minor proto-languages.
In the future, more sophisticated techniques such as deep learning models could be explored to further improve the quality of the obtained representations. Furthermore, the proposed method can be integrated with other linguistic resources, such as recordings, articulatory time series, or orthographic corpora, to enrich the knowledge base and improve the accuracy of reconstructions. With the development of modern, massive linguistic datasets such as Nk2028(nk2028, 2020), CogNet(Batsuren et al., 2022) or MorphyNet(Batsuren et al., 2021) as well as improvements in large pre-trained models, we can expect foundational models that possess emergent and meta-generalizing capabilities to arise in historical phonology or morphology. This avenue of research holds great promise for advancing our understanding of the phonology and evolution of Sinitic languages, and potentially other language families as well.
## Limitations
This study stems from a novel idea for Chinese Historical Phonology studies. As few direct predecessors could offer guidance, there are quite a few limitations to this study that may be addressed with further work.
1. While the initial-final-tone decomposition is convenient in this context, it also limits the transferability of the proposed tool to languages outside of the Sinosphere. This calls for further exploration of more generalizable approaches to phonological representation learning.
2. Polyphonic characters were not fully utilized in the study, and their alignment per reading and tokenization into separate identifiers should be considered in future work.
3. Finally, making full use of the dataset is crucial, and the stochastic train-test split used in this study may leave out important hints.
Alternative sampling strategies, such as cross-validation or bootstrapping, could enhance the robustness of the results.
## Acknowledgements
We are grateful for the valuable advice and feedback we received from various peers during the course of this work. Without their contributions, this research would not have been possible.
## References
Ralph Abboud, Ismail Ceylan, Thomas Lukasiewicz, and Tommaso Salvatori. 2020. BoxE: A Box Embedding Model for Knowledge Base Completion. In Advances in Neural Information Processing Systems, volume 33, pages 9649–9661. Curran Associates, Inc.
Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Mikhail Galkin, Sahand Sharifzadeh, Asja Fischer, Volker Tresp, and Jens Lehmann. 2021a.
Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models under a Unified Framework. *IEEE Transactions on Pattern* Analysis and Machine Intelligence, pages 1–1.
Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, and Jens Lehmann. 2021b. PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings. *Journal of Machine Learning Research*, 22(82):1–6.
Khuyagbaatar Batsuren, Gábor Bella, and Fausto Giunchiglia. 2021. MorphyNet: a Large Multilingual Database of Derivational and Inflectional Morphology. In *Proceedings of the 18th SIGMORPHON*
Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 39–48, Online.
Association for Computational Linguistics.
Khuyagbaatar Batsuren, Gábor Bella, and Fausto Giunchiglia. 2022. A large and evolving cognate database. *Language Resources and Evaluation*,
56(1):165–189.
William H. Baxter and Laurent Sagart. 2014. Old chinese: A new reconstruction.
Gasper Begus. 2020. Modeling unsupervised phonetic and phonological learning in Generative Adversarial Phonology. In *Proceedings of the Society for Computation in Linguistics 2020*, pages 38–48, New York, New York. Association for Computational Linguistics.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating Embeddings for Modeling Multirelational Data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
Rongpei Huang, Xiufang Yang, and Daan He. 2011.
Chinese Character Readings. https://xiaoxue.iis.sinica.edu.tw/ccr/#. Retrieved March 26, 2023.
Yihua Huang. 2021. Comparative Analysis Toolset for Chinese Dialects. https://github.com/lernanto/sinetym. Retrieved March 26, 2023.
Bai Li, Jing Yi Xie, and Frank Rudzicz. 2020. Representation Learning for Discovering Phonemic Tone Contours. In Proceedings of the 17th SIGMORPHON
Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 217–223, Online.
Association for Computational Linguistics.
Johann-Mattis List. 2015. Network perspectives on chinese dialect history. *Bulletin of Chinese linguistics*,
8:27–47.
Johann-Mattis List. 2018. More on network approaches in historical chinese phonology.
Johann-Mattis List, Simon J. Greenhill, and Russell D.
Gray. 2017. The Potential of Automatic Word Comparison for Historical Linguistics. *PLOS ONE*,
12(1):e0170046.
Johann-Mattis List, Nelson-Sathi Shijulal, William F.
Martin, and Hans Geisler. 2014. Using phylogenetic networks to model chinese dialect history.
Han Ma, Roubing Tang, Yi Zhang, and Qiaoling Zhang.
2022. Survey on speech recognition. *Computer* Systems and Applications, 31(1):1–10.
Ilya Makarov, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Šubelj. 2021. Survey on graph embeddings and their applications to machine learning problems on graphs. *PeerJ Computer Science*, 7.
L. McInnes, J. Healy, and J. Melville. 2018. UMAP:
Uniform Manifold Approximation and Projection for Dimension Reduction. *ArXiv e-prints*.
Leland McInnes and John Healy. 2017. Accelerated hierarchical density based clustering. In *Data Mining Workshops (ICDMW), 2017 IEEE International* Conference on, pages 33–42. IEEE.
Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. Umap: Uniform manifold approximation and projection. *The Journal of Open* Source Software, 3(29):861.
Carlo Meloni, Shauli Ravfogel, and Yoav Goldberg.
2021. Ab antiquo: Neural proto-language reconstruction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4460–4473, Online. Association for Computational Linguistics.
John Nerbonne, T. Mark Ellison, and Grzegorz Kondrak.
2007. Computing and historical phonology. In *Proceedings of Ninth Meeting of the ACL Special Interest* Group in Computational Morphology and Phonology on - SigMorPhon '07, pages 1–5, Prauge, Czech Republic. Association for Computational Linguistics.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A Three-Way Model for Collective Learning on Multi-Relational Data.
nk2028. 2020. Qieyun-js. https://github.com/nk2028. Retrieved March 26, 2023.
Cory Shain and Micha Elsner. 2019. Measuring the perceptual availability of phonological features during language acquisition using unsupervised binary stochastic autoencoders. In Proceedings of the 2019 Conference of the North, pages 69–85, Minneapolis, Minnesota. Association for Computational Linguistics.
Zhongwei Shen. 2020. A phonological history of chinese.
Lydia Steiner, Michael Cysouw, and Peter Stadler. 2011.
A Pipeline for Computational Historical Linguistics.
Language Dynamics and Change, 1(1):89–127.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.
ArXiv:1902.10197 [cs, stat].
Sam Tilsen, Seung-Eun Kim, and Claire Wang.
2021. Localizing category-related information in speech with multi-scale analyses. *PLOS ONE*,
16(10):e0258178.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016.
Complex Embeddings for Simple Link Prediction.
ArXiv:1606.06357 [cs, stat].
Pascal Vincent, H. Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders.
In *International Conference on Machine Learning*.
Meihong Wang, Linling Qiu, and Xiaoli Wang. 2021.
A survey on knowledge graph embeddings for link prediction. *Symmetry*, 13:485.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases.
ArXiv:1412.6575 [cs].
Jiaxuan You, Xiaobai Ma, Daisy Ding, Mykel Kochenderfer, and Jure Leskovec. 2020. Handling missing data with graph representation learning. *NeurIPS*.
Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, and Nicholas Jing Yuan. 2022. Multi-modal knowledge graph construction and application: A survey. *ArXiv*,
abs/2202.05786. |
wang-etal-2023-prompt | Prompt-based Zero-shot Text Classification with Conceptual Knowledge | https://aclanthology.org/2023.acl-srw.4 | In recent years, pre-trained language models have garnered significant attention due to their effectiveness, which stems from the rich knowledge acquired during pre-training. To mitigate the inconsistency issues between pre-training tasks and downstream tasks and to facilitate the resolution of language-related issues, prompt-based approaches have been introduced, which are particularly useful in low-resource scenarios. However, existing approaches mostly rely on verbalizers to translate the predicted vocabulary to task-specific labels. The major limitations of this approach are the ignorance of potentially relevant domain-specific words and being biased by the pre-training data. To address these limitations, we propose a framework that incorporates conceptual knowledge for text classification in the extreme zero-shot setting. The framework includes prompt-based keyword extraction, weight assignment to each prompt keyword, and final representation estimation in the knowledge graph embedding space. We evaluated the method on four widely-used datasets for sentiment analysis and topic detection, demonstrating that it consistently outperforms recently-developed prompt-based approaches in the same experimental settings. | # Prompt-Based Zero-Shot Text Classification With Conceptual Knowledge
Yuqi Wang1,3, Wei Wang1, Qi Chen1, Kaizhu Huang2, Anh Nguyen3**, Suparna De**4 1Xi'an Jiaotong Liverpool University, China 2Duke Kunshan University, China 3University of Liverpool, United Kingdom 4University of Surrey, United Kingdom [email protected], {wei.wang03,qi.chen02}@xjtlu.edu.cn, [email protected], [email protected], [email protected]
## Abstract
In recent years, pre-trained language models have garnered significant attention due to their effectiveness, which stems from the rich knowledge acquired during pre-training. To mitigate the inconsistency issues between pre-training tasks and downstream tasks and to facilitate the resolution of language-related issues, promptbased approaches have been introduced, which are particularly useful in low-resource scenarios. However, existing approaches mostly rely on verbalizers to translate the predicted vocabulary to task-specific labels. The major limitations of this approach are the ignorance of potentially relevant domain-specific words and being biased by the pre-training data. To address these limitations, we propose a framework that incorporates conceptual knowledge for text classification in the extreme zero-shot setting. The framework includes prompt-based keyword extraction, weight assignment to each prompt keyword, and final representation estimation in the knowledge graph embedding space. We evaluated the method on four widelyused datasets for sentiment analysis and topic detection, demonstrating that it consistently outperforms recently-developed prompt-based approaches in the same experimental settings.
## 1 Introduction
Numerous studies have achieved great success in applying supervised natural language processing
(NLP) techniques to address a plethora of NLP
applications, including text classification (Dong et al., 2019), natural language inference (Wang et al., 2020) and neural machine translation (Mi et al., 2016). However, achieving high accuracy with deep learning models for textual data analysis necessarily requires a large amount of manually annotated samples, which is both time-consuming and labour-intensive.
To address the issues in low-resource settings, considerable attention has been paid to the pretrained language models (PLMs), such as GPT-3
![0_image_0.png](0_image_0.png)
Figure 1: An example of prompt-based text classification for the binary sentiment analysis task.
(Brown et al., 2020), BERT (Devlin et al., 2019),
and Roberta (Liu et al., 2019), due to their superior performances on knowledge transfer. The model pre-training stage typically involves language modelling tasks, i.e., word prediction based on the context of the input. Extensive investigations, e.g.,
knowledge probing, on PLMs show that they have a certain capacity to store both linguistic and relational knowledge from large-scale corpora of general domain data (Petroni et al., 2019).
In recent years, the paradigm of NLP has been shifted from "pre-train and fine-tune" to "pre-train and prompt" (Liu et al., 2023), to fully exploit these PLMs in a gradient-free manner and effectively mitigate the gap between pre-training tasks and downstream tasks for the extreme zero-shot scenario
(Yin et al., 2019). Specifically, in the prompt-based approaches (Schick and Schütze, 2021; Min et al., 2022; Gao et al., 2021a), each sample in NLP tasks can be wrapped into cloze-style questions with their corresponding templates, prompting the PLMs to generate the targeted output to solve the problem.
For example, in a binary sentiment analysis task
(shown in Figure 1), the text "*no apparent joy*" is transformed to the prompt-augmented input "*no apparent joy. It was <mask>.*", where the *<mask>* is a special token to be predicted by the PLMs. This text will then be labelled as positive or negative according to the predicted words. Most existing works utilize a verbalizer to provide the translations from the predicted vocabulary to the label space in a specific task (Schick and Schütze, 2021). However, these approaches are subject to two significant limitations: (i) by only considering a limited set of pre-defined label words filled in the masked position, some potentially relevant or useful words in the certain domain could be ignored, hindering the model's capacity to generalize; and (ii) the pretraining data of PLMs may contain biases that are reflected in the model's predictions on downstream tasks (Zhao et al., 2021). Some works propose calibration strategies to adjust the distribution of prior probabilities (Hu et al., 2022), which requires access to a large amount of data in specific datasets for true estimation.
In this work, we propose a framework to perform prompt-based zero-shot text classification with conceptual knowledge and overcome the above limitations. The proposed framework includes promptbased keyword extraction, weight assignment to each keyword in the meaningful semantic space, and final representation estimation. Specifically, in the weight assignment component, by leveraging the contextual relationships captured by SimCSE
(Gao et al., 2021b), a powerful contrastive learning model, we refine the probabilities of each keyword being filled in the masked position from the language prompt to mitigate the bias. Additionally, in the final representation, we integrate structured factual data provided by the knowledge graphs (KGs)
to include a wider range of semantic relationships between entities in a given domain. By combining their strengths, the proposed framework enables more informed predictions and a richer understanding of the underlying domain. In the experiment, we strictly follow the "label-fully-unseen" setting proposed by Yin et al. (2019) for evaluation. We employ four widely-used text classification datasets and compare the proposed framework with several recently-developed prompt-based approaches under the same experimental settings. The result indicates that our proposed framework brings significant improvement to the model performance.
## 2 Related Works
Language prompt has been introduced to elicit knowledge from PLMs to solve different NLP
tasks, which was inspired by a series of works related to prompt-based approaches, including GPT-3
(Brown et al., 2020) and PET (Schick and Schütze, 2021). However, one issue under the zero-shot setting identified by Chen et al. (2022) is the lack of domain adaptation. They performed prompt-aware continual pre-training based on adaptively retrieved data for better performance on text classification tasks. To widen the coverage of label words, Hu et al. (2022) incorporated external knowledge bases for the verbalizer construction, which greatly improved the stability.
The above-mentioned works used hand-crafted prompt templates, particularly designed by humans for various NLP tasks. While they are carefully constructed, the process requires a considerable amount of human effort. Several automatic prompting techniques were introduced to automatically select a prompt based on the input provided to the PLMs. Gao et al. (2021a) suggested to employ a pre-trained text-to-text transformer, T5 (Raffel et al., 2020), for candidate template generation. The best language prompt can be derived after the evaluation of each candidate template.
Shin et al. (2020) proposed a gradient-based approach to search for a set of impactful tokens as the prompts that can cause significant changes in the model's output. Nevertheless, the quality of the automatically generated prompt usually cannot be guaranteed, and this approach lacks sufficient interpretability. Besides discrete prompts, research such as (Li and Liang, 2021) and (Gu et al., 2022)
presented continuous prompts as prefixes to the input, which are continuous vectors that can be learned based on patterns and structures from the data. This approach avoids the hassle of explicit prompt design while it introduces a large number of new parameters to be optimized.
## 3 Methodology
We propose a prompt-based approach to tackle the zero-shot text classification problem. The overall framework is shown in Figure 2. We first extract the keywords to summarize the input text with the prompt-based approach. Then, we assign weights to these keywords based on their semantic relevance to the overall meaning of the text. The weighted embeddings of all extracted keywords in the knowledge graph (KG) embedding space are aggregated to produce the final representation of the input text. Finally, we determine if the text is related to a label in the KG according to their cosine similarity. In the following subsections, we describe the task definition in the extreme zero-shot setting, prompt-based keyword extraction, weight assignment and final representation estimation in the constructed KG embedding space.
## 3.1 Task Definition
Given n textual inputs X = {x1, x2*, ..., x*n}, the aim of the text classification task is to assign each input x a label y from a fixed label set containing m labels, i.e., Y = {y1, y2*, ..., y*m}. Unlike the label-partially-unseen zero-shot text classification, where a part of labelled data is available for model training or fine-tuning on a specific domain, in this work, all samples are unseen, and only the label names from the label set Y can be accessed in advance. In order to achieve this goal, it is essential to ensure that the aspect being described in the input text and the meanings of the labels are comprehensible to the framework (Yin et al., 2019).
## 3.2 Prompt-Based Keyword Extraction
To remove noise and preserve the most relevant information, keyword extraction from the input text can summarize its main content and identify the most important concepts. The meaning of an expression, particularly its implicit meaning, can often be inferred from the context in which it is used. Therefore, we first employ a contextualized pre-trained masked language model, denoted as M, for prompt-based keyword extraction. This model has an MLM head on top of the transformer-based architecture, and consequently, it reduces the text classification to the MLM problem with a task-specific template t, which is either added at the beginning or the end of the original input to form a prompt-augmented input. The template includes a mask token *<mask>*, and the probability of each word v from vocabulary V being filled in this position can be predicted by M. The most likely words generated in this manner are somewhat relevant to the input context, as the model integrates contextual information to make predictions. We then construct a keyword set for x, namely, $\mathcal{V}^{x}$, i.e.,
$$\mathcal{V}^{x}=\operatorname*{top\text{-}K}_{v\in\mathcal{V}}\left[P_{\mathcal{M}}\left(\texttt{<mask>}=v\mid[x;t]\right)\right]\tag{1}$$
where [x; t] is the prompt-augmented input for x, and $P_{\mathcal{M}}(\cdot \mid \cdot)$ is the conditional probability generated by the MLM head of M. According to the observations by Meng et al. (2020), the top 50 probable words usually represent the mask well. Hence, we set the parameter K to 50.
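As a concrete illustration of Eq. (1), the sketch below queries a masked language model for the top-K words at the masked slot using the transformers library. The checkpoint name, the template string, and K = 50 follow the setup described later (Section 4.2, Table 1), but the code is only an illustrative sketch, not our released implementation.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Sketch of Eq. (1): top-K candidate words for the masked slot of the prompt.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

def extract_keywords(text, template="It was <mask>.", k=50):
    prompt = f"{text} " + template.replace("<mask>", tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Position of the mask token in the encoded prompt.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    probs = logits[0, mask_pos].softmax(dim=-1)
    top = probs.topk(k)
    words = tokenizer.convert_ids_to_tokens(top.indices.tolist())
    return list(zip(words, top.values.tolist()))

# e.g. extract_keywords("no apparent joy") returns the 50 most probable fillers.
```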
## 3.3 Weight Assignment
To estimate the text representation for the input, each word in $\mathcal{V}^{x}$ should be associated with a weight, indicating relevance and importance to the original textual input. Directly using the probability output by the MLM head could be one possible solution. However, the masked language model may produce a biased probability distribution over the keyword set.
To address this issue, we utilize SimCSE (Gao et al., 2021b), a Siamese network for simple contrastive learning, to assign weights to each word.
SimCSE employs entailments and contradictions from natural language inference (NLI) datasets as supervised signals. In the contrastive loss, the premise and entailment hypothesis are considered positive pairs, while in-batch negatives and the contradiction hypothesis are treated as negative pairs. This approach helps align semantically similar sentence embeddings while separating contradicted/unrelated sentence embeddings.
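To make the training signal described above concrete, the supervised SimCSE objective for a batch of (premise, entailment, contradiction) triples can be written as an InfoNCE-style loss in which the contradiction acts as a hard negative. The sketch below is a simplified illustration; the temperature value and the shapes of the embedding tensors are assumptions, not the released SimCSE code.

```python
import torch
import torch.nn.functional as F

def supervised_simcse_loss(h, h_pos, h_neg, tau=0.05):
    """h, h_pos, h_neg: (batch, dim) embeddings of premises, entailment
    hypotheses, and contradiction hypotheses. In-batch entailments of other
    premises and all contradictions serve as negatives."""
    h, h_pos, h_neg = (F.normalize(t, dim=-1) for t in (h, h_pos, h_neg))
    pos_sim = h @ h_pos.T / tau   # (batch, batch); the diagonal holds the positive pairs
    neg_sim = h @ h_neg.T / tau   # (batch, batch); hard + in-batch negatives
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    targets = torch.arange(h.size(0), device=h.device)
    return F.cross_entropy(logits, targets)
```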
We use the encoding function of SimCSE, $f_\theta(\cdot)$, parametrized by $\theta$, to transform both the original input x and a template in which the mask token has been replaced by the k-th word in $\mathcal{V}^{x}$, denoted as $\tilde{t}_k$, into a meaningful semantic space. We then assign the weight $w_i$ to the i-th word in $\mathcal{V}^{x}$ based on the similarity between $\tilde{t}_i$ and x, i.e.,
$$w_{i}=\frac{e^{\operatorname{sim}\left(f_{\theta}(x),\,f_{\theta}(\tilde{t}_{i})\right)}}{\sum_{k=1}^{K}e^{\operatorname{sim}\left(f_{\theta}(x),\,f_{\theta}(\tilde{t}_{k})\right)}}\tag{2}$$
where $\operatorname{sim}(\cdot)$ is the cosine similarity function.
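A minimal sketch of Eq. (2) is shown below. It encodes the input and each filled-in template with a public SimCSE checkpoint loaded through transformers; the checkpoint name, the use of the pooler output as the sentence embedding, and the template string are illustrative assumptions rather than the exact experimental configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

enc_tok = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
encoder = AutoModel.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
encoder.eval()

def embed(sentences):
    batch = enc_tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.pooler_output  # sentence representation, following the SimCSE usage example

def assign_weights(text, keywords, template="It was {}."):
    filled = [f"{text} " + template.format(w) for w in keywords]  # t~_k: mask replaced by word k
    x_emb = embed([text])                                         # f_theta(x)
    t_emb = embed(filled)                                         # f_theta(t~_k)
    sims = F.cosine_similarity(x_emb, t_emb)                      # sim(f(x), f(t~_k)) for each k
    return F.softmax(sims, dim=0)                                 # w_i in Eq. (2)
```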
## 3.4 Final Representation in Knowledge Graph Embedding Space
As for the extreme zero-shot scenario in our work, ideally, each label y in the label set Y should be equipped with auxiliary information, e.g., a textual description and hand-engineered attributes. Nevertheless, such information available for a particular task is usually limited and may not provide a precise description of the label. Fortunately, there is a source of external knowledge that can be applied with little human effort - KGs. ConceptNet (Speer et al., 2017) is a type of KG that organizes and represents linked open data regarding real-world entities and their relations, offering rich structured knowledge at the conceptual level for the labels.
To leverage the knowledge from the ConceptNet, a process called retrofitting (Faruqui et al., 2015)
is used to refine the pre-trained distributional word
![3_image_0.png](3_image_0.png)
embeddings. The idea is to bring the embeddings of connected entities in the KG closer while maintaining the original distributional ontology (Speer et al., 2017).
The following objective function is minimized to construct the KG embedding space based on the entity set, denoted as $\mathcal{V}^{\text{ent}}$:
$$\sum_{v_{i}\in\mathcal{V}^{\text{ent}}}\left[\sum_{(v_{i},r,v_{j})\in\mathcal{E}}\lambda_{r}\left(\mathbf{v}_{i}-\mathbf{v}_{j}\right)^{2}+\eta_{i}\left(\mathbf{v}_{i}-\hat{\mathbf{v}}_{i}\right)^{2}\right]\tag{3}$$
where $\mathcal{E}$ is the triplet set of the KG, consisting of two entities $v_i$ and $v_j$ linked by their relation r, i.e., $(v_i, r, v_j)$, and $\lambda_r$ is the corresponding weight for r. $\mathbf{v}_i$ is the updated KG embedding for the entity $v_i$, $\hat{\mathbf{v}}_i$ stands for the original word embedding of $v_i$, and $\eta_i$ controls the associative strength between $\hat{\mathbf{v}}_i$ and $\mathbf{v}_i$. For simplicity, we applied alignment by name to align each entity in $\mathcal{V}^{\text{ent}}$ with a word in $\mathcal{V}$.
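The retrofitting objective in Eq. (3) admits a simple coordinate-descent solution (Faruqui et al., 2015): each entity vector is repeatedly set to a weighted average of its graph neighbours and its original distributional vector. The sketch below illustrates this update; the uniform choice $\lambda_r = 1$ and the dictionary-based data structures are assumptions for brevity.

```python
import numpy as np

def retrofit(word_vecs, edges, eta=1.0, lam=1.0, iters=10):
    """word_vecs: dict entity -> np.ndarray (original embeddings, v_hat).
    edges: dict entity -> list of neighbouring entities in the KG."""
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(iters):
        for w, neighbours in edges.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if w not in word_vecs or not nbrs:
                continue
            # Minimizer of Eq. (3) w.r.t. v_w with all other vectors held fixed.
            num = eta * word_vecs[w] + lam * sum(new_vecs[n] for n in nbrs)
            new_vecs[w] = num / (eta + lam * len(nbrs))
    return new_vecs
```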
To estimate the final representation in the KG
embedding space for input text x, we integrate the conceptual representation of each keyword $v_i$ in $\mathcal{V}^{x}$ based on the semantic relevance between $v_i$ and x. Our assumption for the multi-class classification task is that the content of the input text should remain within its desired label and not be relevant to any other labels in the label set. Therefore, the label with the highest similarity to this representation, among all labels in $\mathcal{Y}$, is then selected as the predicted label, denoted by $\hat{y}$, i.e.,
$$\hat{y}=\underset{y\in\mathcal{Y}}{\operatorname{argmax}}\ \operatorname{sim}\left(\mathbf{v}_{y},\ \sum_{v_{i}\in\mathcal{V}^{x}}w_{i}\mathbf{v}_{i}\right)\tag{4}$$
where $\mathbf{v}_{y}$ is the label embedding for y in the KG embedding space.
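Putting the pieces together, Eq. (4) reduces to a weighted average of keyword embeddings followed by a nearest-label search under cosine similarity. A minimal sketch, assuming `kg_vectors` is a dictionary mapping lower-cased ConceptNet entity names to their retrofitted vectors:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_label(keywords, weights, labels, kg_vectors):
    """keywords/weights: output of the previous two steps; labels: label names in Y."""
    doc_vec = sum(w * kg_vectors[k] for k, w in zip(keywords, weights) if k in kg_vectors)
    scores = {y: cosine(kg_vectors[y], doc_vec) for y in labels}
    return max(scores, key=scores.get)  # argmax_y sim(v_y, sum_i w_i v_i), Eq. (4)
```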
## 4 Preliminary Results

## 4.1 Datasets
We conducted experiments on four commonly used text classification datasets, including two sentiment analysis datasets (SST-2 (Socher et al., 2013) and Yelp-polarity (Zhang et al., 2015)) and two topic detection datasets (AG's News (Zhang et al., 2015) and DBPedia (Lehmann et al., 2015)). We adopted the prompt templates from (Chen et al., 2022) for better comparison. For each dataset, we evaluated our method on different templates and reported their average accuracy along with standard deviation. The statistics and example prompt templates of these datasets are listed in Table 1.
## 4.2 Setup
For the prompt-based keywords extraction and weight assignment, we made use of roberta-large
| Datasets | #Samples | #Classes | Type | Example Prompt |
|---|---|---|---|---|
| SST-2 | 1,821 | 2 | Sentiment | All in all, it was *<mask>* |
| Yelp-polarity | 38,000 | 2 | Sentiment | All in all, it was *<mask>* |
| AG's News | 7,600 | 4 | Topic | This topic is about *<mask>* |
| DBPedia | 70,000 | 14 | Topic | Introduction to the *<mask>* |

Table 1: Statistics and example prompt templates of the datasets.
models with transformers1 and simcse2 libraries.
We used the latest version of ConceptNet (5.7) 3 for KG embedding space construction.
We implemented our method with PyTorch 1.5.0 and Python 3.6 on IBM Power 9 architecture. The inference process was accelerated on an NVIDIA
Tesla V100 Volta GPU card with 32GB of graphics RAM.
## 4.3 Main Results
We compared the results with those produced by several prompt-based methods for text classification introduced recently, which share the same extreme zero-shot setting. The main results on the four datasets are shown in Table 2. Channel is the noisy channel approach based on GPT-2 proposed by Min et al. (2022). GPT-3 refers to the work of Zhao et al. (2021) that calibrated the probability distribution with a content-free input. The results of applying Roberta for prompt-based text classification were reported by Chen et al. (2022).
AdaPrompt (Chen et al., 2022) refers to the method that adaptively retrieves data from large-scale corpora for continual pre-training, and iAdaPrompt is the process of iterative adaption.
It is clear that the proposed method outperformed the baselines on all datasets, providing a performance gain of 13.88% and 5.31% on Yelp-polarity and AG's News datasets, respectively. Another notable observation from the main results is that our method has significantly lower standard deviations in comparison with Roberta, AdaPrompt and iAdaPrompt, suggesting that it is more stable when using different prompt templates for text classification.
## 4.4 Ablation Study
We also carried out ablation experiments to explore the effectiveness of weight assignment and KG embedding space construction in the proposed framework. The result of the study is shown in Table 3.

1https://huggingface.co/transformers
2https://pypi.org/project/simcse/
3https://github.com/commonsense/conceptnet-numberbatch
Instead of assigning weights to each keyword based on their importance and relevance as explained in Section 3.3, we directly utilized probabilities of masked token output by the MLM head.
This resulted in a slight decrease in performance, with an average accuracy drop of 0.87%. Then, we replaced the KG embeddings for text representation estimation with another semantically consistent embedding, GloVe (Pennington et al., 2014), which is solely based on the word co-occurrence in the pretraining corpus. We observe significant decreases in accuracy on AG's News and DBPedia datasets by 19.3% and 14.4%, respectively. This indicates that, compared with distributional semantic embedding space, incorporating knowledge to construct KG embedding space can greatly enhance the performance of text classification, especially on topic detection datasets.
## 4.5 Visualization
To further understand the weight assignment, we provided the visualization (shown in Figure 3) of each extracted keyword from examples in topic detection datasets. We arranged these words in descending order of probabilities output by the MLM head. The colour depth denotes the importance of each word according to the given context.
As can be seen, many of the most significant keywords (indicated as dark colours) were correctly highlighted. For example, "*rocket*", "*space*" and
"*launch*" in AG's News example; "store", "company" and "business" in DBPedia example. We also observed that some less related or wronglypredicted words could be detected by the model.
For example, the DBPedia example mainly describes a game company, even though the words like "*author*" and "*blog*" predicted by the MLM
head are at the top of the list, they were assigned with low weights (indicated as light colours) in the weight assignment process, which makes reasonable amendments to the prompt-based keywords
| Models | SST-2 | Yelp-polarity | AG's News | DBPedia |
|---|---|---|---|---|
| Channel (Min et al., 2022) | 77.10 (N/A) | - | 61.80 (N/A) | 51.40 (N/A) |
| GPT-3 (Zhao et al., 2021) | 75.80 (0.00) | - | 73.90 (0.00) | 59.70 (0.00) |
| Roberta (Chen et al., 2022) | 64.56 (16.77) | 72.63 (6.34) | 69.52 (6.96) | 56.32 (0.49) |
| AdaPrompt (Chen et al., 2022) | 75.92 (17.36) | 75.09 (17.57) | 76.55 (7.28) | 70.95 (8.80) |
| iAdaPrompt (Chen et al., 2022) | 77.18 (17.96) | 75.81 (18.05) | 74.28 (9.00) | 73.01 (6.70) |
| Ours | **80.62 (10.08)** | **89.69 (2.81)** | **81.86 (0.75)** | **73.77 (2.55)** |
Table 2: Main results on four commonly-used datasets. We report the average accuracy on different templates and the corresponding standard deviation, which is indicated in brackets.
We also demonstrated an example of KG embeddings to show how knowledge integration can help language understanding in Figure 4. We randomly selected a number of generated keywords from samples labelled as "sport","politics", "business" and
"technology", and utilized the visualization tool, t-SNE4, to visualize their corresponding entity embeddings in the two-dimensional space. The colour of each point in the figure indicates the label of the sample from which the keywords were generated.
It is observable that entity embeddings assigned to different labels are well distributed across the KG
embedding space, indicating that knowledge integration can help capture diverse conceptual aspects of the entities. On the contrary, the embeddings assigned to the same label are well clustered, suggesting that entities with similar properties are mapped closely together in the KG embedding space.
## 5 Conclusion
We proposed a prompt-based framework to tackle the text classification problem in the extreme zero-shot setting. We exploited the PLM to extract keywords from input, assigned their weights in the meaningful semantic space and incorporated conceptual knowledge from ConceptNet to estimate the final representation. Evaluation results showed
| | SST-2 | Yelp-polarity | AG's News | DBPedia |
|---|---|---|---|---|
| Ours | 80.62 (10.08) | 89.69 (2.81) | 81.86 (0.75) | 73.77 (2.55) |
| -WA | 79.42 (10.91) | 88.82 (3.08) | 81.65 (0.79) | 72.59 (2.86) |
| Δ | -1.20 | -0.87 | -0.21 | -1.18 |
| -KG | 77.58 (10.27) | 86.61 (4.03) | 62.35 (16.16) | 58.19 (6.49) |
| Δ | -1.84 | -2.21 | -19.3 | -14.4 |

Table 3: Results of the ablation study (WA: weight assignment; KG: knowledge graph embeddings).
that the method reduced the biases of the MLM
head and generalized well on two topic detection and two sentiment analysis datasets, outperforming several recently-developed prompt-based approaches.
## Limitations
The current work has several limitations that warrant further investigation. Firstly, due to time constraints, we did not conduct experiments using the proposed framework on few-shot settings or a more challenging multi-label classification task. Secondly, our ablation study in Section 4.4 showed that the framework with the weight assignment resulted in only a marginal improvement in performance, suggesting that SimCSE may not be the most effective method for addressing prediction bias. Therefore, future work will explore alternative modeling approaches for bias reduction. Thirdly, in Section 4.5, we noticed that several irrelevant words are also generated as keywords with the language prompt, which may negatively impact the final representation. To address this issue, a better solution, such as keyword filtering, should be considered to improve the current framework. Lastly, we treated each word as a single atomic entity in the KG embedding space, regardless of its possible different senses or meanings. A more careful treatment of word meanings is necessary to handle the problem of polysemy.
(a) AG's News example: grid of the top predicted keywords (*space*, *news*, *science*, *rocket*, *launch*, *nasa*, ...), shaded by their assigned weights
![6_image_0.png](6_image_0.png)
Figure 3: Weight visualization examples from two topic detection datasets. The Byte-Pair Encoding (BPE) algorithm for the Roberta model may generate words that have their first letters capitalized or a special symbol added as the prefix. After the generation, we replace them with the names of the entities that they actually refer to in the KG.
Therefore, there are several duplicates in the keyword set.
![6_image_1.png](6_image_1.png)
## Acknowledgement
We express our sincere gratitude to the matched mentor in the mentoring program, as well as the anonymous reviewers, for their valuable and constructive feedback. Furthermore, we would like to acknowledge the financial support provided by the Postgraduate Research Scholarship (PGRS) at Xi'an Jiaotong-Liverpool University (contract number PGRS2006013). Additionally, this research has received partial funding from the Jiangsu Science and Technology Programme (contract number BK20221260) and the Research Development Fund at Xi'an Jiaotong-Liverpool University (contract number RDF2201132). We are grateful for their support, which has enabled us to carry out this study.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, and Yue Zhang.
2022. AdaPrompt: Adaptive model training for prompt-based NLP. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6057–6068, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Hang Dong, Wei Wang, Kaizhu Huang, and Frans Coenen. 2019. Joint multi-label attention networks for social text annotation. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1348–1354, Minneapolis, Minnesota.
Association for Computational Linguistics.
Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015.
Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021a.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021b.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. PPT: Pre-trained prompt tuning for few-shot learning. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 8410–8423, Dublin, Ireland. Association for Computational Linguistics.
Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, and Maosong Sun. 2022. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2225–2240, Dublin, Ireland. Association for Computational Linguistics.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia.
Semantic web, 6(2):167–195.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
ACM Computing Surveys, 55(9):1–35.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language
model self-training approach. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006–9017, Online. Association for Computational Linguistics.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016.
Supervised attentions for neural machine translation.
In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 2283–2288.
Sewon Min, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Noisy channel language model prompting for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 5316–5330.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 2463–2473.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics".
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. In *Thirty-first AAAI conference on* artificial intelligence.
Zikang Wang, Linjing Li, and Daniel Zeng. 2020.
Knowledge-enhanced natural language inference based on knowledge graphs. In *Proceedings of the* 28th International Conference on Computational Linguistics, pages 6498–6508.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3914–3923, Hong Kong, China. Association for Computational Linguistics.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. |
fujii-etal-2023-different | How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in {J}apanese | https://aclanthology.org/2023.acl-srw.5 | This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in scriptio continua languages where no explicit spaces exist between words, using Japanese as a case study. The tokenizer for such languages often consists of a morphological analyzer and a subword tokenizer, requiring us to conduct a comprehensive study of all possible pairs. However, previous studies lack this comprehensiveness. We therefore train extensive sets of tokenizers, build a PLM using each, and measure the downstream performance on a wide range of tasks. Our results demonstrate that each downstream task has a different optimal morphological analyzer, and that it is better to use Byte-Pair-Encoding or Unigram rather than WordPiece as a subword tokenizer, regardless of the type of task. | # How Do Different Tokenizers Perform On Downstream Tasks In Scriptio Continua Languages?: A Case Study In Japanese
Takuro Fujii* Yokohama National University [email protected] Koki Shibata* University of Tsukuba [email protected] Atsuki Yamaguchi, **Terufumi Morishita** and **Yasuhiro Sogawa**
Hitachi, Ltd.
{atsuki.yamaguchi.xn,terufumi.morishita.wp,yasuhiro.sogawa.tp}@hitachi.com
## Abstract
This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in *scriptio continua* languages where no explicit spaces exist between words, using Japanese as a case study.
The tokenizer for such languages often consists of a morphological analyzer and a subword tokenizer, requiring us to conduct a comprehensive study of all possible pairs. However, previous studies lack this comprehensiveness. We therefore train extensive sets of tokenizers, build a PLM using each, and measure the downstream performance on a wide range of tasks. Our results demonstrate that each downstream task has a different optimal morphological analyzer, and that it is better to use Byte-Pair-Encoding or Unigram rather than WordPiece as a subword tokenizer, regardless of the type of task.
## 1 Introduction
Tokenization is the first key procedure in current natural language processing when inputting a target sentence to a pretrained language model (PLM).
It generally splits an input sequence into subword units, where a subword is a fraction of a word.
Previous efforts have proposed several subword-tokenization algorithms (hereafter, subword tokenizers), such as Byte-Pair-Encoding (BPE) (Sennrich et al., 2016), WordPiece (Schuster and Nakajima, 2012), and Unigram (Kudo, 2018), and different PLMs use different subword tokenizers.1 It is widely acknowledged that tokenization affects the downstream performance of PLMs (Rust et al., 2021; Gow-Smith et al., 2022; Bostrom and Durrett, 2020; Park et al., 2020; Toraman et al.,
2022). The majority of the previous studies have focused on languages with explicit word boundaries, such as English, while research on *scriptio con-*
* Work done while interning at Hitachi, Ltd.
1For example, BERT (Devlin et al., 2019) uses WordPiece, and GPT-3 (Brown et al., 2020) uses byte-level BPE.
![0_image_0.png](0_image_0.png)
Figure 1: Typical tokenization procedures in both *scriptio continua* languages and English
tinua languages, or languages without word boundaries (like Japanese, Chinese, and Thai), is still understudied. The tokenization process in scriptio continua languages traditionally involves morphological analysis, which splits the input text into morphemes (semantic units similar to words in English) using the dictionary designed by human experts (see Step 1 in Figure 1 for an example). In this case, a tokenizer for a PLM consists of a morphological analyzer and a subword tokenizer. To investigate the impact of tokenization in this scenario, we need to perform a comprehensive study on several sets of the available pairs, which is lacking in the previous work (Bostrom and Durrett, 2020; Inoue et al., 2022; Lowphansirikul et al., 2021).
In this paper, we investigate the effect of tokenizers on the downstream performance of PLMs in scriptio continua languages, focusing on Japanese as a case study. We train an extensive collection of tokenizers consisting of known morphological analyzer and subword tokenizer pairs, use them to pretrain and fine-tune BERT models, and measure their performance on a variety of downstream tasks.
On the basis of the experimental results, we address the following three research questions. We first try to answer if we should use a morphological analyzer2in a scriptio continua language (Japanese)
(RQ1). RQ2 and RQ3 each examine whether different morphological analyzers/subword tokenizers perform differently on a downstream task.
Contributions 1) We test a comprehensive set of known morphological analyzer and subword tokenizer pairs and use various downstream tasks to clarify the effect of tokenizers on the downstream performance of Japanese PLMs. 2) Accordingly, we find the following:
- We should use a morphological analyzer for Japanese.
- Each task seems to have its own optimal morphological analyzer(s).
- It is better to use either BPE or Unigram as a subword tokenizer rather than WordPiece.
3) We publicly release the code and PLMs.3
## 2 Japanese Tokenizer
In this section, we explain the morphological analyzers and subword tokenizers used in this paper.
## 2.1 Japanese Morphological Analyzers
Japanese morphological analyzers are based on either a pointwise or sequence prediction method.
The former tokenizes a sentence by extracting features from the characters within a pre-defined window and then predicting if a boundary exists between each character using a classifier. The latter first constructs a lattice from an input sentence on the basis of a pre-defined dictionary, where each path in the lattice represents a candidate token sequence and has a cost, and then selects the path with the lowest cumulative cost as the analysis result.4 We obtain a cost for each path using a statistical model(s) or a hand-crafted dictionary.
We test the following four widely used morphological analyzers: MeCab ⃝M (Kudo et al., 2004),
Juman++ ⃝J (Tolmachev et al., 2018), Sudachi
⃝S (Takaoka et al., 2018), and Vaporetto ⃝V (Akabe et al., 2022). The first three adopt sequence prediction while the last uses pointwise prediction.5
## 2.2 Subword Tokenizers
We test the following three subword tokenizers: BPE, WordPiece, and Unigram, which differ in either vocabulary construction, tokenization algorithms, or both. These tokenizers are empirically known to produce different subword boundaries (Bostrom and Durrett, 2020).
Vocabulary Construction BPE constructs the vocabulary by merging and adding a pair of existing tokens with the highest score in the dictionary until the total number of tokens in the dictionary reaches a pre-defined size. The score is calculated based on the frequency of the existing tokens. WordPiece is similar to BPE but calculates the score based on the frequency of a symbol pair and the individual frequencies. Unigram heuristically builds a large seed vocabulary from a training corpus (e.g., by taking the most frequent substrings) and then iteratively removes the least important symbols from the vocabulary. Specifically, it first fits a unigram LM for the current vocabulary and then computes (i) the log likelihood of the training corpus with the LM
and (ii) that of the training corpus with the LM after removing a particular symbol. It then sets (i) − (ii)
as the cost, which shows the degradation of the log likelihood when the symbol is removed. Finally, it removes the symbol with the lowest degradation.
Tokenization BPE splits a word into characters and iteratively merges those with the most frequent pair into larger known symbols in the vocabulary.
WordPiece6splits a word by the longest subword starting at the beginning of the word in the dictionary and continues splitting until its end. Unigram tokenizes a word by performing Viterbi inference to select the maximum likelihood segmentation based on its vocabulary and unigram LM.
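For reference, all three subword tokenizers can be trained with the HuggingFace `tokenizers` library on text that has already been segmented by a morphological analyzer (represented below as whitespace-separated morphemes, which is how the pre-tokenization step enters the pipeline; the Nothing setting simply skips that segmentation). The vocabulary size and special tokens below are placeholders, and the snippet is a sketch of the general recipe rather than our training script.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE, Unigram, WordPiece
from tokenizers.pre_tokenizers import WhitespaceSplit
from tokenizers.trainers import BpeTrainer, UnigramTrainer, WordPieceTrainer

def train_subword_tokenizer(kind, corpus_iter, vocab_size=30000):
    """corpus_iter yields sentences whose morphemes are separated by spaces."""
    specials = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
    if kind == "bpe":
        tok = Tokenizer(BPE(unk_token="[UNK]"))
        trainer = BpeTrainer(vocab_size=vocab_size, special_tokens=specials)
    elif kind == "wordpiece":
        tok = Tokenizer(WordPiece(unk_token="[UNK]"))
        trainer = WordPieceTrainer(vocab_size=vocab_size, special_tokens=specials)
    else:  # "unigram"
        tok = Tokenizer(Unigram())
        trainer = UnigramTrainer(vocab_size=vocab_size, special_tokens=specials, unk_token="[UNK]")
    tok.pre_tokenizer = WhitespaceSplit()  # keep the morphological boundaries as word units
    tok.train_from_iterator(corpus_iter, trainer=trainer)
    return tok
```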
## 3 Experimental Setup7
Tokenizers We compared a total of 12 tokenizers
(four morphological analyzers and three subword tokenizers), as introduced in §2. We also considered three additional tokenizers not using morphological analyzers. We trained all tokenizers with the vocabulary size of 30k utilizing 10M sentences randomly extracted from Japanese Wikipedia.
Models We used the base configuration of BERT
(total parameters: 125M). For each tokenizer, we pretrained BERT for 500k steps with masked language modeling (Devlin et al., 2019) on the Japanese Wikipedia and CC-100 (Conneau et al.,
6We follow the longest-match-first strategy used in BERT.
7For implementation details, refer to Appendix C.
| Subword | Morphological | MARC-ja (Accuracy) | JSTS (Spearman) | JNLI (Accuracy) | JSQuAD (F1) | JCQA (Acc) | NER (F1) | UD (LAS) | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| | bert-base-japanese | 95.5±0.1 | 85.3±0.3 | 86.8±0.6 | 86.4±0.2 | 76.6±0.8 | 85.6±0.2 | 93.3±0.1 | 87.1 |
| BPE (B) | ⃝M MeCab | 95.4±0.2 | 84.2±0.1 | 88.0±0.4 | 90.1±0.3 | 74.1±0.7 | 83.7±0.8 | 93.6±0.1 | 87.0 |
| BPE (B) | ⃝J Juman++ | 95.5±0.1 | 84.6±0.4 | 87.6±0.4 | 90.1±0.2 | 73.8±0.3 | 85.1±0.6 | 93.6±0.1 | 87.2 |
| BPE (B) | ⃝S Sudachi | 95.5±0.1 | 84.2±0.2 | 88.2±0.3 | 90.2±0.2 | 74.2±0.6 | 83.5±0.6 | 93.8±0.1 | 87.1 |
| BPE (B) | ⃝V Vaporetto | 95.6±0.1 | 84.8±0.2 | 87.5±0.3 | 89.9±0.2 | 74.2±1.1 | 84.1±0.9 | 93.7±0.1 | 87.1 |
| BPE (B) | Nothing | 95.4±0.2 | 82.8±0.2 | 87.2±0.2 | 88.7±0.3 | 72.8±0.8 | 62.9±1.1 | 93.4±0.1 | 83.3 |
| WordPiece (W) | MeCab | 95.5±0.1 | 82.4±0.5 | 87.5±0.3 | 89.2±0.3 | 69.8±0.7 | 84.0±0.9 | 93.6±0.1 | 86.0 |
| WordPiece (W) | Juman++ | 95.3±0.3 | 83.3±0.3 | 87.7±0.2 | 89.8±0.3 | 71.1±0.6 | 84.7±0.5 | 93.6±0.1 | 86.5 |
| WordPiece (W) | Sudachi | 95.3±0.2 | 83.7±0.3 | 87.2±0.4 | 89.6±0.1 | 70.0±0.9 | 82.4±0.6 | 94.0±0.1 | 86.0 |
| WordPiece (W) | Vaporetto | 95.3±0.2 | 83.6±0.1 | 88.0±0.4 | 89.7±0.2 | 71.0±0.4 | 84.0±0.8 | 93.8±0.1 | 86.5 |
| WordPiece (W) | Nothing | 85.5±0.0 | N/A | 55.3±0.0 | 10.1±0.1 | 20.0±0.8 | 0.0±0.0 | 63.8±0.9 | 33.5 |
| Unigram (U) | MeCab | 95.4±0.3 | 84.6±0.4 | 88.3±0.4 | 89.5±0.3 | 74.5±0.8 | 83.1±1.0 | 93.4±0.2 | 87.0 |
| Unigram (U) | Juman++ | 95.4±0.2 | 84.3±0.3 | 87.8±0.3 | 89.9±0.2 | 74.9±1.2 | 84.1±0.4 | 93.4±0.1 | 87.1 |
| Unigram (U) | Sudachi | 95.6±0.2 | 84.8±0.5 | 88.4±0.3 | 89.9±0.1 | 74.5±0.6 | 83.0±1.3 | 93.7±0.1 | 87.1 |
| Unigram (U) | Vaporetto | 95.5±0.3 | 84.6±0.2 | 87.9±0.3 | 89.9±0.1 | 74.3±0.8 | 84.1±0.4 | 93.7±0.1 | 87.1 |
| Unigram (U) | Nothing | 95.4±0.4 | 83.9±0.3 | 87.7±0.8 | 89.3±0.1 | 74.6±0.4 | 76.9±1.0 | 93.2±0.2 | 85.9 |
| RQ2 | (B, W, U) | (✗, ✗, ✗) | (✓, ✓, ✗) | (✓, ✗, ✗) | (✗, ✗, ✗) | (✗, ✓, ✗) | (✓, ✓, ✗) | (✓, ✓, ✓) | |
| RQ3 | (⃝M, ⃝J, ⃝S, ⃝V) | (✗, ✗, ✗, ✗) | (✓, ✓, ✓, ✓) | (✗, ✗, ✓, ✗) | (✓, ✗, ✓, ✗) | (✓, ✓, ✓, ✓) | (✗, ✗, ✗, ✗) | (✗, ✗, ✓, ✗) | |

Statistical test results (last two rows): Kruskal-Wallis test (Kruskal and Wallis, 1952); ✓ if p < .05, otherwise ✗.
2020) datasets, consisting of 2.2 and 1.1M samples each with the maximum length set to 512.
Benchmarks We used the following benchmarks:
JGLUE (Kurihara et al., 2022), NER8, and Universal Dependencies (UD) Japanese-GSD (Asahara et al., 2018).9 Since the test set for JGLUE is not publicly available, we fine-tuned all models on the training set using five-fold cross-validation and evaluated their performance on the development set. Since the development and test sets are not available for NER, we split the training set into 9:1. We fine-tuned the models with five-fold cross-validation by the former and measured the performance using the latter.
## 4 Results and Analysis

This section addresses the three RQs raised in §1.
RQ1: Should we use a morphological analyzer?
Table 1 lists the results on the seven downstream tasks grouped by subword tokenizer. The average scores across tasks ("Avg.") show that tokenizers without a morphological analyzer ("Nothing") exhibited the worst results among tokenizers with the same subword tokenizer. This trend also generally holds for task-specific results. These results make intuitive sense because a morphological analyzer can provide explicit semantic boundaries of an input text, making the input units for subword tokenization similar to English words (Figure 1). This should help a model to capture the semantic and syntactic information more easily and consequently outperform those that do not use a morphological analyzer. We therefore conclude that we should use a morphological analyzer for Japanese.

8Dataset: stockmarkteam/ner-wikipedia-dataset
9We provide the description of each task in Appendix B. For reference, we also measured the performance of bert-base-japanese, which uses MeCab and WordPiece.
In addition to the above, we observe that WordPiece + Nothing produced by far the worst results in all tasks due to the poor tokenization. WordPiece processes a sequence word by word and treats a sequence without a blank as a single word. If it fails to tokenize a particular word, it tokenizes the
"whole" as a single [UNK] token. Without a morphological analyzer, the length of a word becomes abnormally long, making WordPiece more likely to produce an [UNK] token. This means that the majority of an input text will be converted into
[UNK] tokens, thus losing almost all of the content in the text. In fact, the average sequence length
| | JSTS | JNLI | JCQA | NER | UD |
|---|---|---|---|---|---|
| BPE | (⃝V > ⃝M) (⃝V > ⃝S) | - | - | (⃝J > ⃝S) | (⃝S > ⃝M) (⃝S > ⃝J) |
| WordPiece | (⃝S > ⃝M) (⃝V > ⃝M) | - | - | (⃝J > ⃝S) | (⃝S > ⃝M) (⃝S > ⃝J) (⃝V > ⃝M) (⃝V > ⃝J) |
| Unigram | - | - | - | - | - |
and ratio of [UNK] per sample in pretraining were 1.15 ± 3.28 and 99.8 ± 4.9%, respectively. These caused unstable pretraining (see Appendix D).
Compared with other tasks, Nothing in NER
showed a considerable performance degradation with a maximum difference of 22.2 (Juman++ vs.
Nothing in BPE). In NER, annotations are word-level and tend to align well with morphemes. Since tokenizers with morphological analyzers split a morpheme into subword tokens, they can produce more linguistically motivated subword segmentation than Nothing, thus giving them an advantage.
RQ2: Do different morphological analyzers perform differently on downstream tasks? Looking at the statistical test results for RQ2 in Table 1 10, we can see that there were significant performance differences between different morphological analyzers with the same subword tokenizers in some tasks, e.g., JSTS, NER, and UD. In other words, different morphological analyzers could perform differently on different downstream tasks.
For tasks with statistical significance, we further ran the Steel-Dwass test (Douglas and Michael, 1991) to see which morphological analyzer had a significant performance difference from the others (Table 2). We can observe task-specific trends for an effective morphological analyzer(s). Specifically, for JSTS, Vaporetto performed well. For NER, Juman++ was effective. For UD, Sudachi performed well. Therefore, each task seems to have its own optimal morphological analyzer(s).

10Note that we omit Nothing from the following analyses.

![3_image_0.png](3_image_0.png)

RQ3: Do different subword tokenizers perform differently on downstream tasks? From the statistical test results for RQ3 in Table 1, we observe significant performance differences between subword tokenizers with the same morphological analyzers in some tasks, such as JSTS and JCQA. "Avg." in Table 1 indicates that WordPiece performed poorly, while BPE and Unigram achieved similar results. The results of the Steel-Dwass test (Table 3) also confirmed that WordPiece showed significant performance degradation compared with either BPE, Unigram, or both in some tasks. We did not observe a significant difference between BPE and Unigram across all tasks. Therefore, different subword tokenizers could perform differently on downstream tasks, and it is better to use either BPE or Unigram.
We next analyze and discuss which differences in subword tokenizers produced downstream performance differences. First, we look at the difference in the vocabulary of subword tokenizers. We plot the relationship between vocabulary similarity and performance difference between two different subword tokenizers in Figure 2. The vocabulary similarity of two different subword tokenizers is computed as |V1 ∩ V2| / |V|, where |V| is the vocabulary size and V1 and V2 are the vocabularies of two subword tokenizers (T1 and T2). For each task, we computed the performance difference between the two as (1/5)|Σi s1i − Σj s2j|, where s1i and s2j are the i-th and j-th observed scores of T1 and T2, respectively. We observe that symbols related to WordPiece (▲) are plotted in the upper-left corner, while others (■) are in the lower-right corner, indicating that WordPiece has a different vocabulary composition than BPE and Unigram, and its performance difference is far larger than that between BPE and Unigram. These results are consistent with our finding that WordPiece performed poorly with statistical significance, and both BPE and Unigram showed similar results. Therefore, it is possible that the vocabulary of a subword tokenizer has something to do with the downstream performance.
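A minimal sketch of these two quantities is given below. The helper names are illustrative; the division by five follows the five observed scores per tokenizer used in the formula above, and the 30k vocabulary size matches our setup.

```python
def vocab_similarity(vocab1: set, vocab2: set, vocab_size: int = 30000) -> float:
    """|V1 ∩ V2| / |V|: overlap ratio between two subword vocabularies."""
    return len(vocab1 & vocab2) / vocab_size


def performance_difference(scores1, scores2) -> float:
    """(1/5) * |sum_i s1i - sum_j s2j| over the five observed scores per tokenizer."""
    return abs(sum(scores1) - sum(scores2)) / 5
```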
| | MARC-ja | JSTS | JNLI | JSQuAD | JCQA | NER | UD |
|-----------|---------|------|------|--------|------|-----|----|
| MeCab | - | (B > W) | (B > W) | | | | |
| (U > W) | - | (B > W) | (U > W) | - | - | | |
| Juman++ | - | (B > W) | (B > W) | | | | |
| (U > W) | - | - | (U > W) | - | - | | |
| Sudachi | - | (U > W) | (U > W) | (B > W) | (B > W) (U > W) | - | (U > W) |
| (U > W) | | | | | | | |
| Vaporetto | - | (B > W) | (B > W) | | | | |
| (U > W) | - | - | (U > W) | - | - | | |

Table 3: Results of the Steel-Dwass test between subword tokenizers for each morphological analyzer (B: BPE, U: Unigram, W: WordPiece).

Further, while WordPiece uses a greedy longest-match-first strategy in tokenizing a word, both BPE and Unigram use a more sophisticated approach (as explained in §2.2). This algorithmic difference might also contribute to the performance difference between different subword tokenizers.
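To make the algorithmic contrast concrete, the sketch below illustrates greedy longest-match-first segmentation in the WordPiece style. It is a simplified illustration only; the `##` continuation prefix and `[UNK]` fallback mirror common WordPiece conventions rather than a specific implementation.

```python
def wordpiece_like_tokenize(word: str, vocab: set, unk: str = "[UNK]") -> list:
    """Greedily take the longest subword in the vocabulary at each position."""
    tokens, start = [], 0
    while start < len(word):
        end, current = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece          # continuation-piece marker
            if piece in vocab:
                current = piece               # longest match found
                break
            end -= 1
        if current is None:
            return [unk]                      # no subword matches -> unknown token
        tokens.append(current)
        start = end
    return tokens
```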
## 5 Conclusion
To investigate the effect of tokenizers on the downstream performance of PLMs in a scriptio continua language (Japanese), we compared extensive sets of tokenizers by evaluating them on a wide range of downstream tasks and addressed the three RQs in
§1. Future work will examine how to automatically select the optimal tokenizer pair for a given task.
## Limitations
This study has the following limitations:
- We fixed the vocabulary size of each subword tokenizer to 30k. Using a different size might yield different results than those in our paper, though the effect of varying the vocabulary size for a subword tokenizer seemed to be small if the size is sufficiently large (e.g., over 16k or more) (Toraman et al., 2022).
- We have used the BERT architecture for our comparison, while there are other commonly used model architectures such as T5 (Raffel et al., 2020) and GPT-3. The investigation with these architectures is our future work.
- To investigate the impact of tokenizers on the downstream performance of PLMs in scriptio continua languages, we have taken Japanese as a case study. Other scriptio continua languages will be addressed in the future.
## Ethics Statement
This study did not involve any sensitive data but only used publicly available data, including Wikipedia, CC-100, JGLUE, Japanese NER, and UD as explained in the paper. Although we plan to release the resulting models, they might perform unfairly in some circumstances, as reported in Baldini et al. (2022). We highly recommend users to refer to studies on debiasing PLMs, such as Guo et al. (2022).
## Acknowledgements
We would like to thank anonymous reviewers, Yuta Koreeda, and Yuichi Sasazawa for their insightful comments. We also would like to thank Dr.
Masaaki Shimizu for the maintenance and management of the large computational resources used in this paper.
## References
Koichi Akabe, Shunsuke Kanda, Yusuke Oda, and Shinsuke Mori. 2022. Vaporetto: Fast japanese tokenizer based on pointwise prediction (in Japanese). In *Proceedings of the 28th Annual Meeting of the Association for Natural Language Processing*.
Masayuki Asahara, Hiroshi Kanayama, Takaaki Tanaka, Yusuke Miyao, Sumire Uematsu, Shinsuke Mori, Yuji Matsumoto, Mai Omura, and Yugo Murawaki. 2018.
Universal Dependencies version 2 for Japanese. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC
2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Moninder Singh, and Mikhail Yurochkin.
2022. Your fairness may vary: Pretrained language model fairness in toxic text classification. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2245–2262, Dublin, Ireland.
Association for Computational Linguistics.
Kaj Bostrom and Greg Durrett. 2020. Byte pair encoding is suboptimal for language model pretraining. In
Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4617–4624, Online.
Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Critchlow E. Douglas and Fligner A. Michael. 1991. On distribution-free multiple comparisons in the one-way analysis of variance. Communications in Statistics -
Theory and Methods, 20(1):127–139.
Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In *International Conference on Learning Representations*.
Edward Gow-Smith, Harish Tayyar Madabushi, Carolina Scarton, and Aline Villavicencio. 2022. Improving tokenisation by alternative treatment of spaces.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 11430–11443, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012–1023, Dublin, Ireland. Association for Computational Linguistics.
Seiichi Inoue, Nguyen Tung, Akifumi Nakamachi, Shengzhe Li, and Toshinori Sato. 2022. Investigation of the impact of tokenizers using japanese gpt
(in Japanese). In *Proceedings of the 28th Annual* Meeting of the Association for Natural Language Processing.
Phillip Keung, Yichao Lu, György Szarvas, and Noah A.
Smith. 2020. The multilingual Amazon reviews corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 4563–4568, Online. Association for Computational Linguistics.
William H. Kruskal and W. Allen Wallis. 1952. Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260):583–
621.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics.
Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto.
2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 230–237, Barcelona, Spain. Association for Computational Linguistics.
Kentaro Kurihara, Daisuke Kawahara, and Tomohide Shibata. 2022. JGLUE: Japanese general language understanding evaluation. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 2957–2966, Marseille, France. European Language Resources Association.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Lalita Lowphansirikul, Charin Polpanumas, Nawat Jantrakulchai, and Sarana Nutanong. 2021.
Wangchanberta: Pretraining transformer-based thai language models. *CoRR*, abs/2101.09635.
Takashi Miyazaki and Nobuyuki Shimizu. 2016. Crosslingual image caption generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1780–1790, Berlin, Germany. Association for Computational Linguistics.
Kyubyong Park, Joohong Lee, Seongbo Jang, and Dawoon Jung. 2020. An empirical study of tokenization strategies for various Korean NLP tasks. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 133–142, Suzhou, China. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1).
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Phillip Rust, Jonas Pfeiffer, Ivan Vulić, Sebastian Ruder, and Iryna Gurevych. 2021. How good is your tokenizer? On the monolingual performance of multilingual language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3118–3135, Online. Association for Computational Linguistics.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In *2012 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149–5152.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).
Kazuma Takaoka, Sorami Hisamoto, Noriko Kawahara, Miho Sakamoto, Yoshitaka Uchida, and Yuji Matsumoto. 2018. Sudachi: a Japanese tokenizer for
business. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, Miyazaki, Japan. European Language Resources Association (ELRA).
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.
Arseny Tolmachev, Daisuke Kawahara, and Sadao Kurohashi. 2018. Juman++: A morphological analysis toolkit for scriptio continua. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 54–59, Brussels, Belgium. Association for Computational Linguistics.
Cagri Toraman, Eyup Halit Yilmaz, Furkan Şahinuç, and Oguzhan Ozcelik. 2022. Impact of tokenization on language models: An analysis for Turkish.
Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. *IEEE Transactions on Information Theory*,
13(2):260–269.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast,
Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics.
Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–19, Vancouver, Canada. Association for Computational Linguistics.
## Appendices A Japanese Morphological Analyzers
MeCab (Kudo et al., 2004) MeCab tokenizes a sentence by first constructing a lattice on the basis of its dictionary and then selecting the combination with the lowest cumulative cost using the Viterbi algorithm (Viterbi, 1967). The cost is calculated using a pre-defined feature function in sequence labeling.
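A schematic sketch of this lattice-plus-Viterbi idea is shown below, simplified to per-word costs only; MeCab additionally uses connection costs computed from a pre-defined feature function, and the dictionary/cost structures here are assumptions for illustration.

```python
def viterbi_segment(sentence, dictionary, word_cost):
    """Pick the lowest-cumulative-cost segmentation over a word lattice.

    dictionary[i] lists candidate dictionary words starting at position i;
    every position is assumed reachable (real systems add unknown-word nodes).
    """
    n = len(sentence)
    best = [float("inf")] * (n + 1)   # best cost of reaching each position
    back = [None] * (n + 1)           # back-pointer: (previous position, word)
    best[0] = 0.0
    for i in range(n):
        if best[i] == float("inf"):
            continue
        for word in dictionary.get(i, []):
            j = i + len(word)
            cost = best[i] + word_cost[word]
            if j <= n and cost < best[j]:
                best[j], back[j] = cost, (i, word)
    # Recover the best path by following back-pointers from the end.
    path, pos = [], n
    while pos > 0:
        i, word = back[pos]
        path.append(word)
        pos = i
    return path[::-1]
```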
Juman++ (Tolmachev et al., 2018) Juman++ tokenizes a sentence by constructing a lattice in accordance with the dictionary and subsequently selecting the path with the highest score by beam search.
The score is calculated using both a RNN-based language model and a feature-based linear model.
Sudachi (Takaoka et al., 2018) Sudachi puts an emphasis on offering a tokenizer and dictionary for business use, enabling us to select tokens of different granularity for each application. We use the "Middle" unit of granularity, which is similar to words in general sense.
Vaporetto (Akabe et al., 2022) Vaporetto tokenizes a sentence by extracting features from the characters within a pre-defined window and subsequently classifying if a boundary exists between each character with a linear classification model.
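A schematic sketch of pointwise boundary classification is given below. The window size, character n-gram features, and `weights` dictionary are illustrative assumptions and do not reproduce Vaporetto's actual feature set.

```python
def pointwise_boundaries(sentence: str, weights: dict, window: int = 3) -> list:
    """Independently classify each gap between adjacent characters as a
    token boundary, using character n-grams around the gap as features."""
    boundaries = []
    for i in range(1, len(sentence)):                 # gap before character i
        context = sentence[max(0, i - window): i + window]
        feats = [(n, context[j:j + n])                # all 1-3 grams in the window
                 for n in (1, 2, 3)
                 for j in range(len(context) - n + 1)]
        score = sum(weights.get(f, 0.0) for f in feats)
        boundaries.append(score > 0.0)                # linear classification
    return boundaries
```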
## B Downstream Tasks
We briefly describe the seven downstream tasks used in this paper. The statistics for each task dataset are presented in Table 4.
MARC-ja A binary classification task to predict whether a product review is positive or negative.
The dataset is based on the Japanese part of the Multilingual Amazon Reviews Corpus (MARC) (Keung et al., 2020).
JSTS A regression task to predict a semantic similarity score between two sentences. The score ranges from 0 (least similar) to 5 (most similar).
The data were sourced from the Japanese version of the MS COCO Caption Dataset (Chen et al.,
2015) and the YJ Captions Dataset (Miyazaki and Shimizu, 2016).
JNLI A three-way classification task to predict an inference relation between two sentences. The relation includes "contradiction," "neutral," and
"entailment," the same as in SNLI (Bowman et al.,
2015). The data source was the same as that for JSTS.
JSQuAD A question answering task to predict a corresponding answer span given a question and context. The data were sourced from Japanese articles in Wikipedia and its construction process is based on SQuAD v1.1 (Rajpurkar et al., 2016).
JCommonsenseQA A multiple-choice question answering task to select the best choice from five choices given a question. JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor et al., 2019), and it was constructed in the same manner as in CommonsenseQA, which used the multilingual knowledge base: ConceptNet (Speer et al., 2017) as seeds.
NER A task to identify and categorize named entities in a given sentence. The data were sourced from Japanese articles in Wikipedia and annotated by Stockmark Inc. The dataset is available at https://github.com/stockmarkteam/ner-wikipedia-dataset.
UD A dependency parsing task to predict the syntactic dependency structure of a given sentence (Zeman et al., 2017, 2018). The output is a directed tree originating out of a root node. Each edge in the tree has a label that defines a grammatical relationship between two words.
## C Implementation Details
We implemented our tokenizers with the Tokenizers library¹¹ and our models using the PyTorch (Paszke et al., 2019) and Transformers (Wolf et al., 2020) libraries. We trained our models with four NVIDIA V100 (32GB) GPUs for pretraining and one for fine-tuning. We used automatic mixed precision (FP16) provided by PyTorch as default. The code is available on GitHub: https://github.com/hitachi-nlp/compare-ja-tokenizer, and the models are available on the Hugging Face Hub: https://huggingface.co/hitachi-nlp.
## C.1 Data
We downloaded Wikipedia data from https://www.tensorflow.org/datasets/catalog/wikipedia#wikipedia20201201ja. As a preprocessing step, we excluded sentences with fewer than 30 characters and those containing "Category" or table symbols.
11https://github.com/huggingface/tokenizers
| Dataset | License | Task Type | Train | Dev | Test |
|---------|---------|-----------|-------|-----|------|
| MARC-ja (JGLUE) | CC BY-SA 4.0 | Text classification | 187,528 | 5,654 | - |
| JSTS (JGLUE) | CC BY-SA 4.0 | Sentence pair classification | 12,451 | 1,457 | - |
| JNLI (JGLUE) | CC BY-SA 4.0 | Sentence pair classification | 20,073 | 2,434 | - |
| JSQuAD (JGLUE) | CC BY-SA 4.0 | Question answering | 62,859 | 4,442 | - |
| JCommonsenseQA (JGLUE) | CC BY-SA 4.0 | Question answering | 8,939 | 1,119 | - |
| Japanese NER | CC-BY-SA 3.0 | Named entity recognition | 5,343 | - | - |
| UD-Japanese-GSD | CC BY-SA 4.0 | Dependency parsing | 7,050 | 507 | 543 |
Table 4: Statistics for each dataset used in this paper. Note that the test sets are not currently publicly available for JGLUE. Japanese NER does not have the corresponding development and test sets.
| Hyperparameter | Value |
|----------------|-------|
| Batch size | 128 |
| Total training steps | 500,000 |
| Adam ϵ | 1e-8 |
| Adam β1 | 0.9 |
| Adam β2 | 0.999 |
| Sequence length | 512 |
| Learning rate | 1e-4 |
| Learning rate schedule | Linear warmup |
| Warmup steps | 10,000 |
| Weight decay | 0.01 |
| Attention dropout | 0.1 |
| Dropout | 0.1 |

Table 5: Hyperparameters for pretraining

| Hyperparameter | Value |
|----------------|-------|
| Batch size | 32 |
| Epochs | 5 for JGLUE tasks & NER; 10 for UD |
| Adam ϵ | 1e-8 |
| Adam β1 | 0.9 |
| Adam β2 | 0.999 |
| Sequence length | 512 for MARC-ja & UD; 348 for JSQuAD; 128 for JSTS, JNLI & NER; 64 for JCQA |
| Learning rate | 3e-5 for JGLUE tasks & NER; 5e-5 for BERT in UD; 1e-3 for BAP in UD |
| Learning rate schedule | Linear warmup |
| Warmup steps | 10% of steps |
| Weight decay | 0.01 |
| Attention dropout | 0.1 |
| Dropout | 0.1 |

Table 6: Hyperparameters for fine-tuning
## C.2 Model
We used the base configuration of BERT (12 hidden layers and attention heads, hidden size = 768, intermediate size = 3072, 125M total parameters).
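For reference, this configuration can be expressed with the Transformers API as follows. This is a sketch: `vocab_size` is set to the 30k subword vocabulary used in this paper, and we assume a masked language modeling head purely for illustration.

```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=30000,          # 30k subword vocabulary
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
)
model = BertForMaskedLM(config)
```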
## C.3 Pretraining
We pretrained all models for 500k steps and optimized them with AdamW (Loshchilov and Hutter, 2019). We mostly followed the configurations of Devlin et al. (2019). Table 5 lists the hyperparameter settings used in pretraining.
## C.4 Fine-Tuning
Table 6 lists the hyperparameters for fine-tuning models on the JGLUE, NER, and UD datasets. For UD, we trained a deep biaffine attention parser (Dozat and Manning, 2017) built on top of the PLMs. We computed an average for each token over the top four layers of the BERT hidden representations and used it as an input to a biaffine attention parser (BAP). The dimensionalities of arc and relation features given to each biaffine module are 500 and 100, respectively. We used the SuPar library¹² to implement the parser and followed its default hyperparameter configurations.

¹²https://github.com/yzhangcs/parser
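A minimal sketch of how the parser input can be built from the top four hidden layers is shown below; the function name and calling convention are illustrative, and the model and tokenizer are assumed to be loaded elsewhere.

```python
import torch

def parser_inputs(sentence, model, tokenizer):
    """Average the top four BERT hidden layers per token; the result is
    fed to the biaffine attention parser (BAP)."""
    enc = tokenizer(sentence, return_tensors="pt")
    out = model(**enc, output_hidden_states=True)
    top4 = torch.stack(out.hidden_states[-4:])   # (4, batch, seq_len, 768)
    return top4.mean(dim=0)                      # (batch, seq_len, 768)
```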
## D Pretraining Loss
Figure 3 shows the pretraining loss curves for our models grouped by morphological analyzer. We can see that WordPiece + Nothing was unstable in pretraining.
|
lyu-etal-2023-semantic | Semantic-Aware Dynamic Retrospective-Prospective Reasoning for Event-Level Video Question Answering | https://aclanthology.org/2023.acl-srw.7 | Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is need for using such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code is publicly available at \url{https://github.com/lyuchenyang/Semantic-aware-VideoQA}. | # Semantic-Aware Dynamic Retrospective-Prospective Reasoning For Event-Level Video Question Answering
Chenyang Lyu† Tianbo Ji‡∗ Yvette Graham¶ **Jennifer Foster**†
† School of Computing, Dublin City University, Dublin, Ireland
‡ Nantong University, China
¶ School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland [email protected], [email protected], [email protected] [email protected]
## Abstract
Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers.
However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information especially at the event level. There is need for using such semantic connections to facilitate complex reasoning across video frames.
Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering.
Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA.
Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code is publicly available at https://github.com/
lyuchenyang/Semantic-aware-VideoQA.
## 1 Introduction
This paper focuses on one specific variant of Video Question Answering (VQA) (Xu et al., 2016; Yu et al., 2018; Zhong et al., 2022), namely Eventlevel VQA (EVQA) (Xu et al., 2021). In general, the objective of the VQA task is to provide an answer to a visual-related question according to the content of an accompanying video. Despite significant recent progress in VQA, EVQA
still remains one of the most challenging VQA-based tasks since it requires complex reasoning over the *events* across video frames (Sadhu et al., 2021; Zhong et al., 2022; Liu et al., 2022). To tackle the challenges in EVQA, a number of approaches have been proposed (Xu et al., 2021).

∗Corresponding author.
Luo et al. (2022) propose a temporal-aware bidirectional attention mechanism for improving event reasoning in videos, while Zhang et al. (2022) propose a novel model named Energy-based Refinedattention Mechanism (ERM), which obtains better performance compared to previous approaches with a smaller model size. Liu et al. (2022), on the other hand, incorporate visual-linguistic causal dependencies based on Graph Convolutional Networks (Kipf and Welling, 2017) for enhancing cross-modal event reasoning for EVQA.
Despite recent advances, conventional EVQA approaches generally fail to take into account the explicit semantic connection between questions and the corresponding visual information at the event level. Therefore, we propose a new approach that takes advantage of such semantic connections, using the Semantic Role Labeling (SRL) (Màrquez et al., 2008; Palmer et al., 2010; He et al., 2017)
structure of questions. The model uses SRL information to learn an explicit semantic connection between the text-based questions and visual information in videos. Additionally, we carry out a multi-step reasoning mechanism over video frames to avoid adapting to spurious correlation and shortcuts by explicitly learning the reasoning process itself (Yi et al., 2018; Zhang et al., 2021; Picco et al., 2021; Hamilton et al., 2022; Zhu, 2022).
Specifically, in each reasoning step, the model should explicitly decide which frame should be focused on by predicting the reasoning direction (retrospective or *prospective*). In terms of the question, in each reasoning step, we focus on one or more specific SRL arguments with high attention weights, and model its connection with the visual information (i.e., video frames) contained within the corresponding video. For example, for a question such as [**ARG1***: How many cars] were [Verb:*
involved] [ARG2: in the accident?]*, the model concentrates on the *ARG2* when locating the accident, before determining how many cars were involved in the accident (*ARG1*). In a specific reasoning step t, we inject the relevant visual information based on the semantic connection between the question and video frames by updating a hidden vector. This vector is ultimately expected to contain the necessary information for predicting the correct answer. In the reasoning process, we employ a *coverage mechanism* (Tu et al., 2016) to improve the coverage of the SRL arguments of the question. Namely, instead of simply focusing on a small number of specific arguments, the model is capable of including a large range of arguments.
To investigate the effectiveness of the proposed approach, we conduct experiments on a benchmark EVQA dataset: TrafficQA. Results reveal the model to achieve performance superior to that of existing baselines for a range of reasoning types (e.g.,
counterfactual, prospective).
## 2 Methodology
An overview of our approach is shown in Figure 1.
Suppose the input of our model consists of a video V composed of n image frames sampled from it: $V = \{f_0, f_1, \ldots, f_{n-1}\}$, and a corresponding question $Q = \{w_0, w_1, \ldots, w_{m-1}\}$ with associated SRL arguments $S = \{S_0, S_1, \ldots, S_{N-1}\}$ where $S_i = \{w_i, w_{i+1}, \ldots, w_k\}$. All frames $V = \{f_0, f_1, \ldots, f_{n-1}\}$ are fed into an IMAGE ENCODER followed by temporal attention modeling to produce temporal-aware frame representations $V' = \{f'_0, f'_1, \ldots, f'_{n-1}\} \in \mathbb{R}^{n \times d}$. Meanwhile, we use a TEXT ENCODER to obtain the representations of the question with its corresponding SRL arguments: $Q' \in \mathbb{R}^{1 \times d}$ and $S' \in \mathbb{R}^{N \times d}$.
We then perform multi-step reasoning in which we iteratively update the hidden state vector h with the visual information from frame representations based on the attention weights between them and the SRL arguments of the question. h is updated from the initial step h0 to the final step hT −1 where T is the total number of reasoning steps. Finally, we predict the most probable answer a based on hT −1.
## 2.1 Multi-Step Reasoning
Before the first reasoning step, we initialize:
$$h_0 = \mathrm{Attn}(Q', V', V') \tag{1}$$
$$j = \mathrm{argmax}(\mathrm{AttnWeights}(Q', V', V')) \tag{2}$$
where *Attn* serves as the q, k, v *attention*¹ modeling (Vaswani et al., 2017) and j represents the index of the frame with the highest attention weight.

¹In this work, we use a low temperature τ in the *softmax* to encourage the model to assign more attention weights to the most relevant frame.
In each specific reasoning step t, we firstly use $h_{t-1}$ as the *attention key* to obtain the relevant SRL argument: $S'_t = \mathrm{Attn}(h_{t-1}, S', S')$. Subsequently, we infer the next focused frame by:
$$V^{focus} = \mathrm{Attn}(r_t, V', V') \tag{3}$$
where $r_t = g(h_{t-1}, S'_t)$. Finally, we update the hidden state vector $h_{t-1}$ based on the currently focused frame (the frame with the largest attention weight):
$$h_t = \delta(h_{t-1}, V^{focus}) \tag{4}$$
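A minimal sketch of the temperature-scaled attention used above is given below (single-head dot-product attention; the projection layers of the actual model are omitted, and the tensor shapes are illustrative).

```python
import torch
import torch.nn.functional as F

def attn(query, key, value, tau=0.2):
    """Dot-product attention with a low softmax temperature so that most of
    the weight concentrates on the most relevant frame."""
    scores = query @ key.transpose(-1, -2) / key.size(-1) ** 0.5
    weights = F.softmax(scores / tau, dim=-1)
    return weights @ value, weights

# Initialization as in Eqs. (1)-(2):
#   h0, w = attn(Q, V, V);  j = w.argmax(dim=-1)
```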
## 2.2 Retrospective-Prospective Reasoning
We propose a *Retrospective-Prospective Reasoning* mechanism for Eq. 3 in order to explicitly decide whether the model should move to future frames (*prospective reasoning*) or move back to previous frames (*retrospective reasoning*). We obtain the *retrospective frame* $V^{retro}$ and *prospective frame* $V^{prosp}$ by:
$$V^{retro} = \psi(g(h_{t-1}, S'_t), V', \mathrm{RetroMask}(j)) \tag{5}$$
$$V^{prosp} = \phi(g(h_{t-1}, S'_t), V', \mathrm{ProspMask}(j)) \tag{6}$$
where ψ and ϕ are MASKED ATTENTION modules that are used to obtain *retrospective* and *prospective* frames, and $g(h_{t-1}, S'_t)$ and $V'$ serve as *query* and *key, value*, respectively. $\mathrm{RetroMask}(j)$ means that all frames after j ($f_{i>j}$) will be masked, whereas $\mathrm{ProspMask}(j)$ means that all frames before j ($f_{i<j}$) will be masked. After obtaining $V^{retro}$ and $V^{prosp}$, we generate a probability:
$$p = \sigma(\lambda(V^{retro}, V^{prosp})) \tag{7}$$
If p is larger than a pre-defined threshold α, we update $h_t = \delta(h_{t-1}, V^{retro})$; otherwise, we update $h_t = \delta(h_{t-1}, V^{prosp})$ as in Eq. 4. The index for the next-focused frame j is also updated accordingly.
The reasoning process is shown in Algorithm 1.
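A minimal sketch of the masked attention behind RetroMask/ProspMask follows. The shapes are illustrative (`frames` is n×d and `query` is d-dimensional), the learned g, δ, and λ networks are omitted, and we assume the focused index j leaves at least one unmasked frame in the chosen direction.

```python
import torch
import torch.nn.functional as F

def directional_frame(query, frames, j, direction, tau=0.2):
    """Attend only to frames before j ('retro') or after j ('prosp')."""
    n = frames.size(0)
    idx = torch.arange(n)
    mask = idx > j if direction == "retro" else idx < j   # RetroMask / ProspMask
    scores = frames @ query / frames.size(-1) ** 0.5
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores / tau, dim=-1)
    return weights @ frames, int(weights.argmax())         # frame summary, new j
```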
## 2.3 Coverage Mechanism
We additionally propose to employ a coverage mechanism (Tu et al., 2016) to encourage the model to include as many SRL arguments as possible in the reasoning process. Specifically, we track the attention distribution $C_t \in \mathbb{R}^{1 \times N}$ of $h_{t-1}$ on all SRL arguments $S'$:
$$C_t = C_{t-1} + \frac{\mathrm{AttnWeights}([h_{t-1}; C_{t-1}], S', S')}{\chi} \tag{8}$$
Algorithm 1: Multi-step dynamic retrospective-prospective reasoning with coverage mechanism

Input:
    V′ = {f_0, f_1, ..., f_{n−1}}: representations of video frames
    Q′: question
    S′: SRL representations of Q
    T: reasoning steps
    χ: normalization factor
    α: threshold of the probability for using the retrospective frame

h_0 = Attn(Q′, V′, V′)
j = argmax(AttnWeights(Q′, V′, V′))
C_0 = 0
for i in T do
    S′_i = Attn(h_{i−1}, S′, S′, C_{i−1})
    C_i = C_{i−1} + AttnWeights(h_{i−1}, S′, S′, C_{i−1}) / χ
    V^retro = ψ(g(h_{i−1}, S′_i), V′, RetroMask(j))
    V^prosp = ϕ(g(h_{i−1}, S′_i), V′, ProspMask(j))
    p = σ(f(V^retro, V^prosp))
    if p > α then
        h_i = δ(h_{i−1}, V^retro)
        j = argmax(ψ(g(h_{i−1}, S′_i), V′, RetroMask(j)))
    else
        h_i = δ(h_{i−1}, V^prosp)
        j = argmax(ϕ(g(h_{i−1}, S′_i), V′, ProspMask(j)))
where χ represents the normalization factor.² We obtain the weighted $S'_t$ by $S'_t = \mathrm{Attn}([h_{t-1}; C_{t-1}], S', S')$, where we concatenate $C_{t-1}$ to $h_{t-1}$ as an additional input to the *Attn* function for the purpose of informing the model to assign more attention weights to previously less-focused SRL arguments, in order to improve the coverage of all SRL arguments.
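A minimal sketch of the coverage update in Eq. 8 is shown below; χ is set to the number of SRL arguments (following the normalization choice noted in this section's footnote), and the attention weights used in the example are illustrative.

```python
import torch
import torch.nn.functional as F

def update_coverage(C, attn_weights, chi):
    """Accumulate attention over SRL arguments so that later steps are
    pushed toward less-covered arguments."""
    return C + attn_weights / chi

# Example: N = 4 SRL arguments, chi = N.
C = torch.zeros(4)
w = F.softmax(torch.tensor([2.0, 0.5, 0.1, 0.1]), dim=-1)
C = update_coverage(C, w, chi=4)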
## 2.4 Training Objective
For the answer prediction, we encode all answer options $A = \{a_0, \ldots, a_{M-1}\}$ separately and then select the one with the highest similarity to $h_{T-1}$. We optimize our model parameters θ using *Cross Entropy* loss:
$$J(\theta) = -\sum_{i}\sum_{k} \log \frac{e^{F\left(a_{k}, h_{T-1}\right)}}{\sum_{j=0}^{M-1} e^{F\left(a_{j}, h_{T-1}\right)}} y_{i,k} \tag{9}$$
where F is the function measuring the similarity between an answer candidate and $h_{T-1}$, and $y_{i,k}$ represents the answer label for the i-th example: if the correct answer for the i-th example is the k-th answer, then $y_{i,k}$ is 1; otherwise it is 0.
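A minimal sketch of this objective for a single example is given below; we use a dot product as the similarity function F purely for illustration, since the paper leaves F abstract.

```python
import torch
import torch.nn.functional as F

def answer_loss(h_final, answer_reprs, gold_index):
    """Cross-entropy over answer candidates scored by similarity to h_{T-1}."""
    logits = answer_reprs @ h_final            # (M,) one score per candidate
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_index]))
```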
2In this work, we use the number of SRL arguments of the corresponding question as the normalization factor.
| Models | Setting-1/4 | Setting-1/2 |
|-----------------------------------|---------------|---------------|
| Q-type (random) (Xu et al., 2021) | 25.00 | 50.00 |
| QE-LSTM (Xu et al., 2021) | 25.21 | 50.45 |
| QA-LSTM (Xu et al., 2021) | 26.65 | 51.02 |
| Avgpooling (Xu et al., 2021) | 30.45 | 57.50 |
| CNN+LSTM (Xu et al., 2021) | 30.78 | 57.64 |
| I3D+LSTM (Xu et al., 2021) | 33.21 | 54.67 |
| VIS+LSTM (Ren et al., 2015) | 29.91 | 54.25 |
| BERT-VQA (Yang et al., 2020) | 33.68 | 63.50 |
| TVQA (Lei et al., 2018) | 35.16 | 63.15 |
| HCRN (Le et al., 2020a) | 36.49 | 63.79 |
| Eclipse (Xu et al., 2021) | 37.05 | 64.77 |
| ERM (Zhang et al., 2022) | 37.11 | 65.14 |
| TMBC (Luo et al., 2022) | 37.17 | 65.14 |
| CMCIR (Liu et al., 2022) | 38.58 | N/A |
| Ours | 43.19 | 71.63 |
Table 1: Evaluation results on TrafficQA dataset.
## 3 Experiments 3.1 Dataset
We employ a benchmark dataset for EVQA - TrafficQA (Xu et al., 2021) which contains 62,535 QA
pairs and 10,080 videos. We follow the standard split of TrafficQA - 56,460 pairs for training and 6,075 pairs for evaluation. We further sample 5,000 examples from training data as the dev set.
## 3.2 Experimental Setup
We use CLIP ViT-B/16 (Radford et al., 2021)
3 to initialize our image encoder and text encoder.
We evenly sample 10 frames from each video in the TrafficQA dataset. The SRL parser employed in the experiments is from AllenNLP (Gardner et al., 2018; Shi and Lin, 2019). We train our model over 10 epochs with a learning rate of 1 × 10−6and a batch size of 8. The optimizer is AdamW (Loshchilov and Hutter, 2019). We set the maximum reasoning step T to 3 and we use a temperature τ of 0.2 in *Attention* modeling. The hyper-parameters are empirically selected based on the performance on dev set. There are two experimental settings for TrafficQA (Xu et al., 2021): 1)
Setting-1/2, this task is to predict whether an answer is correct for a given question based on videos; 2) Setting-1/4: this task follows the standard setup of multiple-choice task in which the model is expected to predict the correct the answer from the four candidate options.
## 3.3 Results
The experimental results on the test set of TrafficQA are shown in Table 1, where we also include the previous baseline models for EVQA.4 The results show that our proposed approach obtains accuracy of 43.19 under the multiple-choice setting, which surpasses previous state-of-the-art approaches including Eclipse (Xu et al., 2021),
ERM (Zhang et al., 2022), TMBC (Luo et al., 2022)
and CMCIR (Liu et al., 2022) by at least 4.5 points.
Furthermore, our approach achieves an accuracy of 71.63 under Setting 1/2, outperforming previous strong baselines by at least 6 points. The results show the effectiveness of our proposed multi-step reasoning approach for event-level VideoQA.
Ablation Study We conduct experiments on the dev set of TrafficQA, investigating the contribution of both the *retrospective-prospective reasoning* and coverage mechanism on the performance of our proposed EVQA approach. The results are shown in Table 3, which reveals that multi-step reasoning is critical in terms of model performance while the coverage mechanism can provide additional, albeit less substantial, improvements.
Results by Question Type We take a closer look at model performance on different question types, e.g. reverse reasoning, counterfactual reasoning, etc. The results are shown in Table 2. They reveal that our proposed approach outperforms previous state-of-the-art models on all individual question types by a large margin with large improvements seen for introspection, *reverse* and *counterfactual* questions.
Effect of Reasoning Steps We study the effect of varying reasoning steps. The results are shown in Table 4. Increasing reasoning steps improves performance, especially from 1 step to 3 steps. Additionally, the performance (both Setting 1/4 and 1/2) is stable with reasoning steps exceeding three.
## 4 Conclusion And Future Work
In this paper, we propose a multi-step dynamic retrospective-prospective approach for EVQA. Our approach employs a multi-step reasoning model that explicitly learns reasoning based on the semantic connection of the SRL structure of a question and corresponding video frames. We additionally proposed a *coverage mechanism* to improve the coverage of SRL arguments in the reasoning process. Experimental results show that the proposed
| Method | Basic | Attribution | Introspection | Counterfactual | Forecasting | Reverse | All |
|--------|-------|-------------|---------------|----------------|-------------|---------|-----|
| HCRN (Le et al., 2020b) | 34.17 | 50.29 | 33.40 | 40.73 | 44.58 | 50.09 | 36.26 |
| VQAC (Kim et al., 2021) | 34.02 | 49.43 | 34.44 | 39.74 | 38.55 | 49.73 | 36.00 |
| MASN(Seo et al., 2021) | 33.83 | 50.86 | 34.23 | 41.06 | 41.57 | 50.80 | 36.03 |
| DualVGR (Wang et al., 2021) | 33.91 | 50.57 | 33.40 | 41.39 | 41.57 | 50.62 | 36.07 |
| CMCIR (Liu et al., 2022) | 36.10 | 52.59 | 38.38 | 46.03 | 48.80 | 52.21 | 38.58 |
| Ours | 37.05 | 52.68 | 43.91 | 50.81 | 54.26 | 55.52 | 43.19 |
Table 2: Results by various *question types* on the dev set of TrafficQA. The highest performance is in bold.
Table 3: Ablation study results on TrafficQA dev set, where MR represents *Multi-step Reasoning* and CM represents *Coverage Mechanism*. MR and CM are coupled in our approach.
| Models | Setting-1/4 | Setting-1/2 |
|---------------------|---------------|---------------|
| Model w/o MR and CM | 42.53 | 69.61 |
| Model w/o CM | 46.15 | 74.97 |
| Model | 47.38 | 75.83 |
Table 4: The effect of various reasoning steps.
| Reasoning Steps | Setting-1/4 | Setting-1/2 |
|-------------------|---------------|---------------|
| Model w/ 1 step | 41.57 | 71.46 |
| Model w/ 2 steps | 44.21 | 74.95 |
| Model w/ 3 steps | 47.38 | 75.83 |
| Model w/ 4 steps | 47.23 | 75.96 |
| Model w/ 5 steps | 47.15 | 75.87 |
approach obtains superior performance compared to that of state-of-the-art EVQA models.
## Acknowledgements
This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18/CRT/6183). We thank the reviewers for helpful feedback.
## Limitations
This paper focuses on a specific variety of VideoQA, namely event-level VideoQA. We only incorporate *event* information from the question (textual) side, as we think that parsing video frames is inaccurate and could introduce unexpected errors; we should also explore how to inject *event-level* information from the visual side in the future with more competitive visual parsing models. In addition, our experiments are only conducted on one dataset due to resource constraints; we should also conduct experiments on more datasets to verify the effectiveness of our approach.
## References
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018.
AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6, Melbourne, Australia. Association for Computational Linguistics.
Kyle Hamilton, Aparna Nayak, Bojan Božić, and Luca Longo. 2022. Is neuro-symbolic AI meeting its promise in natural language processing? A structured review. *arXiv preprint arXiv:2202.12205*.
Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In *Proceedings of the 55th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483, Vancouver, Canada. Association for Computational Linguistics.
Nayoung Kim, Seong Jong Ha, and Je-Won Kang. 2021.
Video question answering using language-guided deep compressed-domain video feature. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1708–1717.
Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France,*
April 24-26, 2017, Conference Track Proceedings.
OpenReview.net.
Thao Minh Le, Vuong Le, Svetha Venkatesh, and Truyen Tran. 2020a. Hierarchical conditional relation networks for video question answering. In *Proceedings of the IEEE/CVF conference on computer* vision and pattern recognition, pages 9972–9981.
Thao Minh Le, Vuong Le, Svetha Venkatesh, and Truyen Tran. 2020b. Hierarchical conditional relation networks for video question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9972–9981.
Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg.
2018. Tvqa: Localized, compositional video question answering. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 1369–1379.
Yang Liu, Guanbin Li, and Liang Lin. 2022.
Cross-modal causal relational reasoning for eventlevel visual question answering. arXiv preprint arXiv:2207.12647.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Yuanmao Luo, Ruomei Wang, Fuwei Zhang, Fan Zhou, and Shujin Lin. 2022. Temporal-aware mechanism with bidirectional complementarity for video q&a.
In *2022 IEEE International Conference on Systems,*
Man, and Cybernetics (SMC), pages 3273–3278.
IEEE.
Lluís Màrquez, Xavier Carreras, Kenneth C Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: an introduction to the special issue.
Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010.
Semantic role labeling. Synthesis Lectures on Human Language Technologies, 3(1):1–103.
Gabriele Picco, Thanh Lam Hoang, Marco Luca Sbodio, and Vanessa Lopez. 2021. Neural unification for logic reasoning over natural language. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3939–3950, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763.
PMLR.
Mengye Ren, Ryan Kiros, and Richard Zemel. 2015.
Exploring models and data for image question answering. *Advances in neural information processing* systems, 28.
Arka Sadhu, Kan Chen, and Ram Nevatia. 2021. Video question answering with phrases via semantic roles. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2460–2478, Online. Association for Computational Linguistics.
Ahjeong Seo, Gi-Cheon Kang, Joonhan Park, and Byoung-Tak Zhang. 2021. Attend what you need:
Motion-appearance synergistic networks for video question answering. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 6167–6177, Online. Association for Computational Linguistics.
Peng Shi and Jimmy J. Lin. 2019. Simple bert models for relation extraction and semantic role labeling.
ArXiv, abs/1904.05255.
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In *Proceedings of the 54th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 76–85, Berlin, Germany. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Jianyu Wang, Bing-Kun Bao, and Changsheng Xu. 2021.
Dualvgr: A dual-visual graph reasoning unit for video question answering. *IEEE Transactions on* Multimedia, 24:3369–3380.
Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In *Proceedings of the IEEE conference on computer vision and pattern recognition*,
pages 5288–5296.
Li Xu, He Huang, and Jun Liu. 2021. Sutd-trafficqa: A
question answering benchmark and an efficient network for video reasoning over traffic events. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pages 9878–9888.
Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, and Haruo Takemura. 2020. Bert representations for video question answering. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1556–1565.
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neuralsymbolic vqa: Disentangling reasoning from vision and language understanding. *Advances in neural* information processing systems, 31.
Youngjae Yu, Jongseok Kim, and Gunhee Kim. 2018.
A joint sequence fusion model for video question answering and retrieval. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pages 471–487.
Fuwei Zhang, Ruomei Wang, Fan Zhou, and Yuanmao Luo. 2022. Erm: Energy-based refined-attention mechanism for video question answering. *IEEE*
Transactions on Circuits and Systems for Video Technology.
Xi Zhang, Feifei Zhang, and Changsheng Xu. 2021.
Explicit cross-modal representation learning for visual commonsense reasoning. *IEEE Transactions on* Multimedia, 24:2986–2997.
Yaoyao Zhong, Wei Ji, Junbin Xiao, Yicong Li, Weihong Deng, and Tat-Seng Chua. 2022. Video question answering: Datasets, algorithms and challenges.
arXiv preprint arXiv:2203.01225.
Zihao Zhu. 2022. From shallow to deep: Compositional reasoning over graphs for visual question answering.
In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 8217–8221. IEEE. |
sugimoto-etal-2023-jamp | Jamp: Controlled {J}apanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models | https://aclanthology.org/2023.acl-srw.8 | Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs). Although various datasets have been created for this task, they primarily focus on English and do not address the need for resources in other languages. It is unclear whether current LMs realize the generalization capacity for temporal inference across languages. In this paper, we present Jamp, a Japanese NLI benchmark focused on temporal inference. Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis. To begin the data annotation process, we create diverse inference templates based on the formal semantics test suites. We then automatically generate diverse NLI examples by using the Japanese case frame dictionary and well-designed templates while controlling the distribution of inference patterns and gold labels. We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments (i.e., temporal inference patterns). Our findings demonstrate that LMs struggle with specific linguistic phenomena, such as habituality, indicating that there is potential for the development of more effective NLI models across languages. | # Jamp**: Controlled Japanese Temporal Inference Dataset For** Evaluating Generalization Capacity Of Language Models
Tomoki Sugimoto1, Yasumasa Onoe2**, Hitomi Yanaka**1 1The University of Tokyo, 2The University of Texas at Austin
{sugimoto.tomoki,hyanaka}@is.s.u-tokyo.ac.jp [email protected]
## Abstract
Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs). Although various datasets have been created for this task, they primarily focus on English and do not address the need for resources in other languages. It is unclear whether current LMs realize the generalization capacity for temporal inference across languages. In this paper, we present JAMP, a Japanese NLI benchmark focused on temporal inference. Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis. To begin the data annotation process, we create diverse inference templates based on the formal semantics test suites. We then automatically generate diverse NLI examples by using the Japanese case frame dictionary and well-designed templates while controlling the distribution of inference patterns and gold labels. We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments (i.e., temporal inference patterns). Our findings demonstrate that LMs struggle with specific linguistic phenomena, such as habituality, indicating that there is potential for the development of more effective NLI models across languages.
## 1 Introduction
Natural Language Inference (NLI) is the task of determining whether a set of premises entail a hypothesis. NLI involving temporal inference is a challenging task and remains a significant problem for pre-trained language models (LMs). One line of research has investigated the temporal inference abilities of LMs (Kober et al., 2019; Vashishtha et al., 2020; Thukral et al., 2021; Chen and Gao, 2022). However, existing datasets and analyses primarily focus on English, and more analysis and datasets are required for other languages, including Japanese. Therefore, it is still unclear to what extent current LMs can perform various types of Figure 1: An illustration of our data annotation process.
INT in the templates means interval. 99K means that the gold label is undetermined, → means that the gold label is *Entailment* and ↛ means that the gold label is Contradiction.
temporal inference across languages. In this paper, we construct JAMP1, which is a Japanese NLI
dataset for temporal inference, and evaluate the generalization capacity of several LMs on our dataset.
Our goal is to construct a temporal inference dataset that precisely assesses the generalization capacities of LMs. Manual annotation is a viable option for achieving this goal, but it does not fully meet our needs based on several limitations described below. Although using crowdsourcing to increase the size of datasets may be cost-effective (Bowman et al., 2015; Williams et al.,
2018), managing biases and artifacts in the resulting data can be challenging (Poliak et al., 2018b; Gururangan et al., 2018). In contrast, datasets manually constructed by experts (Cooper et al., 1996; Kawazoe et al., 2015) may have high quality but are potentially expensive to scale. Additionally, manual dataset construction makes it difficult to control the distribution of vocabulary and inference patterns in a dataset because it heavily relies on the prior knowledge of each annotator (e.g.,
word choice). To address the issues associated with 1Our dataset is available on https://github.com/
tomo-ut/temporalNLI_dataset
スミス は バーミンガム に 2 年 住んだ。
Smith wa Birmingham ni **2 year** live . (Smith lived in Birmingham **for two years**.)
スミス は バーミンガム に 住んだ。
Smith wa Birmingham ni live . (Smith lived in Birmingham.)
G Entailment
| 昨日 、 APCOM は 契約書 に 署名した。 yesterday , APCOM wa contract ni sign . (APCOM signed the contract yesterday.) 今日 は 7 月 14 日 土曜日 だ。 today wa 7 month 14 day Saturday da . (Today is Saturday, July 14.) | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| H | APCOM は 13 日 の 金曜日 に 契約書 に 署名した。 APCOM wa 13 day no Friday ni contract ni sign . (APCOM signed the contract on Friday the 13th.) スミス は バーミンガム に 2 年 住んだ。 Smith wa Birmingham ni 2 year live . (Smith lived in Birmingham for two years.) スミス は バーミンガム に 住んだ。 Smith wa Birmingham ni live . (Smith lived in Birmingham.) |
manual annotation, prior work uses template-based approaches that automatically assign diverse vocabulary to templates that are manually created by experts to construct scalable datasets (Richardson et al., 2020; Yanaka and Mineshima, 2021). By using this method, we can strictly manage the vocabulary and inference patterns in a dataset, thus it is a suitable approach for probing LMs.
Figure 1 presents our data annotation process, which consists of two stages: *template creation* and *problem generation*. We first collect Japanese temporal inference examples from JSeM (Kawazoe et al., 2015), which is the Japanese version of FraCaS (Cooper et al., 1996), and manually transform them into templates by masking content words (e.g.,
nouns and verbs) and temporal expressions (e.g.,
date and time), producing 46 tense fragments (i.e.,
temporal inference patterns) based on formal semantics. We then generate examples by assigning content words sampled from a Japanese case frame dictionary (Kawahara and Kurohashi, 2006) and randomly generating temporal expressions to those templates. These techniques ensure that the sentences in JAMP are diverse and cover a wide range of temporal inference patterns. It is important to note that our temporal NLI examples are derived from a diverse set of templates that are classified with tense fragments, allowing us to create different test splits depending on the goal of evaluation, such as generalization across different tense fragments.
We evaluate two Japanese models and one multilingual model on our dataset. We analyze whether they can solve our dataset in a zero-shot setting
(trained on existing Japanese NLI datasets) and a fine-tuning setting (trained on a small subset of our dataset). The experimental results demonstrate that the LMs can generalize across different temporal expressions but fail to generalize some tense fragments such as habituality.
## 2 Background

## 2.1 Frame
Frames are one of the basic types of knowledge for language understanding. There are several English resources for frame knowledge, including VerbNet (Schuler, 2005), FrameNet (Baker et al., 1998), and PropBank (Palmer et al., 2005), and previous studies have used these resources to construct datasets (Poliak et al., 2018a; Mitra et al., 2020).
In Japanese, case particles (e.g., が–pronounced ga) are attached to verbal arguments (e.g., subject)
and determine the case frame. A Japanese case frame dictionary (Kawahara and Kurohashi, 2006)
is the largest resource that reflects these characteristics of the Japanese language. This case frame dictionary is a set of 110,000 predicates and associated nouns extracted from 10 billion sentences, annotated for each predicate usage. Table 2 shows an example of a case frame in the Japanese case frame dictionary.
As shown in Table 2, the case frame dictionary contains information regarding the frequencies of case frames and nouns. In this paper, we use these case frames to generate a dataset containing diverse sentence patterns without grammatical errors.
## 2.2 Fragments
Some existing datasets (Cooper et al., 1996; McCoy et al., 2019; Yanaka and Mineshima, 2021), including JSeM (Kawazoe et al., 2015), define problem categories for each problem for further analysis.
In this study, we systematically defined tense fragments (i.e., temporal inference patterns) based on
Table 2: An example of a case frame in the Japanese case frame dictionary.
the categories of temporal inference patterns in JSeM.
Table 1 shows some examples of tense fragments
(see Appendix A for additional tense fragments). In Table 1, "Main Tense Fragment" represents higher-level classifications, and "Sub-tense Fragment" represents sub-classifications that are subdivided from the main tense fragments. Tense fragments enable a more detailed analysis of LMs' understanding of temporal inference.
## 3 JAMP
In this paper, we present JAMP, which is a Japanese NLI dataset for temporal inference, and propose a method for automatic construction from templates based on tense fragments. Figure 2 shows the pipeline of our method. First, we create a template by masking content words and temporal expressions in existing temporal NLI problems (§3.1).
A template consists of the following triplet: (i) a set of premises in which content words and temporal expressions are masked, (ii) a hypothesis in which content words and temporal expressions are masked, and (iii) a condition for determining a gold label. Here, a gold label can take on three values:
Entailment, *Contradiction*, and *Neutral*. Next, we generate training and test sentences by assigning content words selected from the vocabulary list to the template (§3.2). We create a vocabulary list by using the Japanese case frame dictionary to make
| | Template | Generated problem |
|---|---|---|
| P | agent_1 が interval_1 以内に np_1 を vp_1_past。 | エレン が 6 年間 以内 に ゴール を 達成した。 Ellen ga 6 years within ni goal o achieved . (Ellen has achieved her goal within six years.) |
| H | agent_1 は interval_2 以内に np_1 を vp_1_past。 | エレン は 5 年間 以内 に ゴール を 達成した。 Ellen wa 5 years within ni goal o achieved . (Ellen has achieved her goal within five years.) |
| G | if interval_1 ≤ interval_2 then Entailment else Neutral | Neutral |

Table 3: An example of a template and the problem generated from that template.
sentences more coherent.2 We manually inspect all sentences in the test examples and eliminate any sentences that are unnatural or harmful. We then generate train and test problems by assigning temporal expressions to train and test sentences. Finally, we split the training problems along three axes (e.g., tense fragment, time format, and time span) to create training data for various experimental settings (§3.4). In this section, we describe each of these steps in detail.
## 3.1 Template Creation
In the first step, we construct templates consisting of a set of premises, a hypothesis, and a gold label.
We create templates for temporal problems based on problems in the temporal inference section of JSeM by masking content words such as nouns and verbs (e.g., スミス (Smith), 住んだ (*lived*)), and temporal expressions (e.g., 7 月 14 日 (*July 14*),
2 年 (2 *years*)). Additionally, because the gold label depends on the temporal expression in the sentence, we convert the original gold label into a condition in which the gold label is determined by specifying a temporal expression. Table 3 shows an example of the template. In the example in Table 3, the condition is "if interval_1 ≤ interval_2 then *Entailment* else *Neutral*" and the gold label is determined according to the temporal expressions in interval_1 and interval_2.

2We considered a generation method using masked LMs or generative models but did not adopt them in this study because the generation time was too long, and it was difficult to control the vocabulary without changing inference patterns and syntactic structures.
There can be strong correlations between specific words and labels in examples generated from templates based on certain JSeM problems. Because such correlations could introduce undesired biases into our dataset, we removed these correlations by constructing new challenging templates for some JSeM problems (see Appendix B for examples).
## 3.2 Problem Generation
We generate problems by filling the masks in templates with various nouns, verbs, and temporal expressions and determining the gold label from these temporal expressions. We use the Japanese case frame dictionary as a vocabulary for selecting verbs and nouns (§2.1). Specifically, we extract verbs whose frequency in the dictionary is greater than 1,000 and nouns whose frequency is greater than 100, manually filter out about 30 offensive words from them, and use the filtered vocabulary.
We target two types of temporal expressions in this study: time points (e.g., 8 月 16 日 7 時 (*August 16, 7:00*)) and intervals (e.g., 3 ヶ月 (*3 months*)). For time points, we use 10 formats combining year/month/day/hour units: Year (Y), Month
(M), Day (D), Hour (H), YM, MD, DH, YMD,
MDH, and YMDH. For intervals, we use four formats: Year, Month, Day, and Hour.
We assign content words and temporal expressions to templates as follows. First, we randomly select a verb with the case in the template from the case frame dictionary. Next, we randomly select nouns that the selected verb can take as its case in the template. Here, we select a noun for a subjective case from a manually created list of common first names (e.g., *Alice* and Bob).
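To make the content-word sampling concrete, the sketch below fills a template's slots from a toy stand-in for the case frame dictionary; the dictionary entries, name list, and helper function are hypothetical, and the temporal-expression slot (interval_1) is handled in the next step described below.

```python
import random

# Toy stand-in for the Japanese case frame dictionary: verb -> case -> candidate nouns.
# The entries here are hypothetical examples, not the actual dictionary contents.
CASE_FRAMES = {
    "達成した": {"ヲ格": ["ゴール", "目標"]},
    "書いた": {"ヲ格": ["報告書", "手紙"]},
}
FIRST_NAMES = ["アリス", "ボブ", "エレン", "スミス"]  # subjects come from a name list

def fill_content_words(template: str) -> str:
    """Fill the agent/np/vp slots of a template; interval_* slots are filled later."""
    verb = random.choice(list(CASE_FRAMES))
    agent = random.choice(FIRST_NAMES)
    noun = random.choice(CASE_FRAMES[verb]["ヲ格"])  # object noun licensed by the verb
    return (template.replace("agent_1", agent)
                    .replace("np_1", noun)
                    .replace("vp_1_past", verb))

print(fill_content_words("agent_1 が interval_1 以内に np_1 を vp_1_past。"))
```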
Then, if a temporal expression exists in the original problem corresponding to the template, we generate a new temporal expression as follows and assign it to templates. If the original temporal expression is an interval, we generate an interval by concatenating an integer randomly selected from one to nine according to one of the four formats described above. If the original temporal expression is a time point, we first randomly select a time
point within the range of January 1, 2000, at 0:00 to December 31, 2020, at 24:00. Then, one of the ten formats described above is applied to the selected time point. For example, if the MD format is applied to 0:00 on January 1, 2010, then the generated temporal expression will be "January 1."
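The temporal-expression generation described above can be sketched as follows; the format strings and helper names are ours, while the value ranges (integers from one to nine and time points between January 1, 2000, at 0:00 and December 31, 2020) follow the description.

```python
import random
from datetime import datetime, timedelta

# Hypothetical sketch of the temporal-expression generator described above.
INTERVAL_UNITS = ["年間", "ヶ月", "日間", "時間"]  # Year / Month / Day / Hour intervals
POINT_FORMATS = ["Y", "M", "D", "H", "YM", "MD", "DH", "YMD", "MDH", "YMDH"]

def generate_interval():
    """Concatenate a random integer from 1 to 9 with a random interval unit."""
    return f"{random.randint(1, 9)}{random.choice(INTERVAL_UNITS)}"

def generate_time_point(fmt=None):
    """Sample a time point between 2000-01-01 0:00 and 2020-12-31 23:00,
    then keep only the units required by the chosen format."""
    fmt = fmt or random.choice(POINT_FORMATS)
    start, end = datetime(2000, 1, 1, 0), datetime(2020, 12, 31, 23)
    hours = random.randint(0, int((end - start).total_seconds() // 3600))
    point = start + timedelta(hours=hours)
    parts = {"Y": f"{point.year}年", "M": f"{point.month}月",
             "D": f"{point.day}日", "H": f"{point.hour}時"}
    return "".join(parts[u] for u in "YMDH" if u in fmt)

print(generate_interval())        # e.g. "3ヶ月" (3 months)
print(generate_time_point("MD"))  # e.g. "1月1日" (January 1)
```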
Finally, we assign a gold label by evaluating the condition for the gold label in the template. Table 3 shows an example of a template and the problem generated from that template. In Table 3, the condition is "if interval_1 ≤ interval_2 then *Entailment* else *Neutral*." Because the generated temporal expressions for interval_1 and interval_2 are 6年間
(*six years*) and 5年間 (*five years*), respectively, its gold label is *Neutral*. To ensure that the distribution of gold labels is approximately uniform, we generate the same number of problems from each pair of a template and a gold label.
| Unnatural Sentence | Cause |
|---|---|
| チャーリー が インク を 吸った。 Charlie ga ink o sucked . (Charlie sucked ink.) | Semantically unnatural |
| ウォルター は 性格 に 変わった。 Walter wa characteristic ni changed . (Walter changed in character.) | Incomplete sentence |
| キャロル は 速度 に 生ずるていた。 Carroll wa speed ni arise . (Carroll arose to speed.) | Semantically unnatural; Grammatically unnatural |

Table 4: Examples of sentences removed from the test set and the causes of their removal.
## 3.3 Quality Control

## 3.3.1 Dataset Artifacts
Previous works have demonstrated that existing datasets are often affected by dataset artifacts and spurious correlations between surface features and gold labels (Jia and Liang, 2017; Gururangan et al.,
2018; Poliak et al., 2018b). We conduct a statistical analysis on our dataset following the method outlined by Gardner et al. (2021) to identify token-level artifacts. Our analysis reveals the extent to which certain words are highly correlated with one of three labels (see Appendix D for details).
Our automatic data annotation approach enables us to effectively manage the examples that we generate. We conduct this statistical analysis during the data generation phase and modify vocabulary words and templates to eliminate shortcuts and spurious correlations between certain words and gold labels. As depicted in Figure 3, the majority of words in JAMP do not exhibit spurious correlations with the gold labels, whereas a significant number of words in Temporal NLI (Vashishtha et al., 2020)
correlate with the gold labels.3 In JAMP, the word "いた"4 stands out as an exception, but its impact is relatively low because its score is close to the green line.
## 3.3.2 Dataset Quality
Naturalness We manually check the naturalness of all test examples and filter out disqualified sentences (approx. 40% of all sentences).5 Table 4 shows examples of sentences we remove from the test set and the reasons for their removal.
Semantically unnatural (e.g., the examples at the top and bottom of Table 4) refers to sentences that are grammatically correct but may not be plausible. One reason for the generation of such sentences is that the Japanese case frame dictionary does not describe the correspondence between cases (e.g., ヲ格 (accusative) and ニ格 (dative)). The second case, an incomplete sentence, could be generated because the Japanese case frame dictionary does not describe the essential cases for predicates. Other examples, such as the third, show verbs conjugated in the wrong form. This is probably because the verb is not included in the dictionary used to conjugate verbs.

3We sample 100k training examples for this statistical analysis.

4This Japanese word has multiple grammatical roles. One is a past stative verb, and another is a past continuous form of a verb.

5We ask 3 graduate students studying NLP/linguistics to judge sentence quality.
Correctness We randomly sample 100 cases from the constructed test data and manually judge their entailment labels. We check whether the judgement is the same as their gold labels. We confirm that the gold labels in all cases were annotated as intended. However, the gold labels for some problems were debatable. For example, in the sentence *I read a book for three hours*, the meaning of *for three hours* can be interpreted as "just three hours," "about three hours," and "at least three hours". The interpretation depends on the speaker and the context. In such cases, their gold labels depend on the reading, but we confirmed that they are correct in at least one of the possible readings.
## 3.4 Split Problems
Our controlled data generation method enables us to split problems into seen problems (i.e., problems included in both test and training data) and unseen problems (i.e., problems included only in test data)
systematically, which is suitable for investigating the generalization capacity of LMs. In this study, we split our training data to analyze whether LMs can generalize various temporal inference patterns learned from training data. We split the training data based on three axes: tense fragment, time format, and time span. Table 5, 6, and 7 show an example of a seen/unseen problem in each split.
## 3.4.1 Tense Fragment-Based Split
Tense fragment refers to the categorization of the problems described in Section 2.2. We define two splits based on the tense fragments: FRAGMENT_EASY and FRAGMENT_HARD. These splits aim to test whether LMs can learn temporal inference from basic problems and generalize the acquired inference patterns to more challenging problems. Therefore, both FRAGMENT_EASY and FRAGMENT_HARD include only basic problems in the training data and challenging problems in the test data. FRAGMENT_HARD contains a higher proportion of challenging problems and fewer tense fragments in the training data, which is a more difficult setting for models.

| Seen problem | Unseen problem |
|---|---|
| TF: Order relation - Transitive, Gold label: Entailment | TF: Order relation - Transitive + Before/After, Gold label: Entailment |
| TF: Usage of 現在 (now) - Present tense, Gold label: Entailment | TF: Usage of 現在 (now) - Past tense, Gold label: Neutral |

Table 5: Examples of seen and unseen problems in the tense fragment-based split setting (TF: tense fragment).
We define basic and challenging problems based on the sub-tense fragments in the tense fragment classification. For example, as in the first example in Table 5, suppose a certain tense fragment has sub-tense fragments that are finer than that tense fragment. In this case, the original tense fragment
(Order relation - Transitive) is considered as basic, and the subcategories (Order relation - Transitive
+ Before/After) are considered as challenging. In contrast, as in the second example in Table 5, if there is no such sub-tense fragment, but there are sub-tense fragments with the same granularity as that of the classification, one (Usage of 現在 (now)
- Present tense) is considered as basic, and the other
(Usage of 現在 (now) - Past tense) is considered as challenging.
## 3.4.2 Time Format-Based Split
Time format represents the format of the temporal expression inserted in a problem. In this study, we define ten time formats by combining multiple time units (year, month, day, and hour) for time points and define two splits based on the time formats.
This split aims to test whether LMs can learn the size relationships between time units (year > month
> day > hour) from a minimal number of combinations of units and generalize the acquired inference patterns to apply them to complex combinations.
The first split is FORMAT_HARD, which contains only a single time unit pattern (i.e., patterns involving only year, only month, only day, or only hour)
in a training set and evaluates models on combined patterns of multiple time units.
The other split is FORMAT_EASY, which includes a minimum number of combinations (i.e.,
year-month pattern, month-day pattern, and dayhour pattern) that allow the models to understand the size relationships between time units, as shown in the second example in Table 6. By comparing the accuracy of FORMAT_EASY and FORMAT_HARD,
we can determine whether LMs can learn and generalize the size relationships between time units.
## 3.4.3 Time Span-Based Split
Time span represents the closeness of temporal expressions when multiple temporal expressions appear in a problem. In this study, we define two time spans: SHORT and RANDOM. In SHORT time span problems, the temporal expressions are generated such that the time points included in the problem are close to each other (see Appendix C),
as shown in the unseen problem in Table 7. On the other hand, in RANDOM time span problems, the distance between the time points included in the problem is not predetermined, and the temporal expressions are generated in the same manner as described in Section 3.2. Therefore, the distances between the time points included in a problem are often far apart, as shown in the seen problem in Table 7.
When a model determines the order of two time points, the model must compare the two time points in order, starting with the largest unit. If two time points are far apart, then the model can determine their order by comparing only the larger units, but if two time points are close, then the model must compare additional units to determine their order.
| | Seen problem | Unseen problem |
|---|---|---|
| | Format: Year, Gold label: Neutral | Format: Year-Month-Day-Hour, Gold label: Entailment |
| P | パット が 6 年間 以内 に 代価 を 支払った。 Pat ga 6 year within ni price o paid . (Pat paid the price within 6 years.) パット は 2009 年 に その 代価 を 支払い 始めた。 Pat wa 2009 year ni its price o pay began . (Pat began paying the price in 2009.) | エレン が 2 年間 以内 に 考え を 変えた。 Ellen ga 2 years within ni mind o changed . (Ellen changed her mind within 2 years.) エレン は 2016 年 11 月 18 日 15 時 に その 考え を 変え 始めた。 Ellen wa 2016 year 11 month 18 day 15 hour ni its mind o change began . (Ellen began to change her mind at 15:00 on November 18, 2016.) |
| H | パット は 2011 年 まで に その 代価 を 支払い 終えた。 Pat wa 2011 year until ni its price o pay finished . (Pat finished paying the price by 2011.) | エレン は 2020 年 10 月 15 日 21 時 まで に その 考え を 変え 終えた。 Ellen wa 2020 year 10 month 15 day 21 hour until ni its mind wo change finished . (Ellen finished changing her mind by 21:00 on October 15, 2020.) |
| | Format: Year-Month, Gold label: Entailment | Format: Year-Month-Day-Hour, Gold label: Entailment |
| P | 2018 年 8 月 以来 、 ウォルター は 閣僚 に 指示している。 2018 year 8 month since , Walter wa cabinet ni instruct . (Since August 2018, Walter has instructed cabinet members.) 現在 、 2018 年 11 月 である。 now , 2018 year 11 month dearu . (It is now November 2018.) | 2008 年 2 月 27 日 0 時 以来 、 ビクター は ソフトバンク に 移籍している。 2008 year 2 month 27 day 0 hour since , Victor wa Softbank ni transfer . (Since 0:00 on February 27, 2008, Victor has been transferred to Softbank.) 現在 、 2008 年 2 月 27 日 4 時 である。 now , 2008 year 2 month 27 day 4 hour dearu . (It is now 4:00 on February 27, 2008.) |
| H | ウォルター は 2018 年 9 月 には 閣僚 に 指示していた。 Walter wa 2018 year 9 month niwa cabinet ni instruct . (Walter had instructed the cabinet ministers in September 2018.) | ビクター は 2008 年 2 月 27 日 1 時 には ソフトバンク に 移籍していた。 Victor wa 2008 year 2 month 27 day 1 hour niwa Softbank ni transfer . (Victor was transferred to Softbank at 1:00 on February 27, 2008.) |

Table 6: Examples of problems that are in the training data (seen problems) and corresponding problems that are not in the training data (unseen problems) in a time format-based split setting.

For example, the order of January 1, 2010, at 1:00 and October 10, 2020, at 10:00 can be determined by looking only at the year, but the order of January 1, 2010, at 1:00 and January 1, 2010, at 10:00
requires comparing the year, month, day, and hour in order. Therefore, we consider that determining the order relationships between close time points is more difficult than determining the order relationships between distant time points.
We define a time span-based split that contains only RANDOM in the training data. This split aims to test whether LMs can learn the order relationships of temporal expressions and generalize the acquired inference patterns to apply them to combinations of temporal expressions that require more difficult evaluation.
| | Seen problem | Unseen problem |
|---|---|---|
| | Span: Random, Gold label: Neutral | Span: Short, Gold label: Contradiction |
| P | 2002 年 8 月 16 日 7 時 以来 、 ウォルター は 実家 に 泊まっている。 2002 year 8 month 16 day 7 hour since , Walter wa parents' house ni stay . (Walter has been staying at his parents' house since 7:00 on August 16, 2002.) 現在 、 2013 年 5 月 26 日 3 時 である。 now , 2013 year 5 month 26 day 3 hour dearu . (It is now 3:00 on May 26, 2013.) | 2015 年 9 月 11 日 7 時 以来 、 フランク は 細工 に 挑戦している。 2015 year 9 month 11 day 7 hour since , Frank wa craft ni try . (Frank has been trying to craft since 7:00 on September 11, 2015.) 現在 、 2015 年 9 月 11 日 10 時 である。 now , 2015 year 9 month 11 day 10 hour dearu . (It is now 10:00 on September 11, 2015.) |
| H | ウォルター は 2018 年 5 月 15 日 12 時 には 実家 に 泊まっていた。 Walter wa 2018 year 5 month 15 day 12 hour niwa parents' house ni stay . (Walter was staying at his parents' house at 12:00 on May 15, 2018.) | フランク は 2015 年 9 月 11 日 5 時 には 細工 に 挑戦していた。 Frank wa 2015 year 9 month 11 day 5 hour niwa craft ni try . (Frank was trying to craft at 5:00 on September 11, 2015.) |

Table 7: Examples of problems that are in the training data (seen problems) and corresponding problems that are not in the training data (unseen problems) in a time span-based split setting.
## 4 Experiments
We evaluate several NLI models on our dataset.
We consider six pre-trained LMs (Japanese BERT-base/large, Japanese RoBERTa-base/large, multilingual XLM-RoBERTa-base/large)6 available on huggingface/transformers7 in our experiments. We conduct experiments in three settings: zero-shot
(monolingual), zero-shot (cross-lingual), and fine-tuning. Here, zero-shot means that we do not use our training data but use existing Japanese NLI datasets for training data. The statistics of the datasets used in our experiments are provided in Appendix E.
Zero-shot setting (monolingual) We train the LMs on three concatenated NLI datasets: the standard Japanese NLI datasets JSNLI (automatic translation of the English SNLI dataset (Bowman et al.,
2015)) (Yoshikoshi et al., 2020) and JSICK (manual translation of the English SICK dataset (Marelli et al., 2014)) (Yanaka and Mineshima, 2022), and the Japanese NLI dataset PLMUTE_ja (Sugimoto and Yanaka, 2022), which involves temporal order.
We then evaluate the models on our test data.
Zero-shot setting (cross-lingual) We train the LMs on three concatenated NLI datasets: the standard English NLI dataset SNLI, SICK, and the English NLI dataset PLMUTE (Thukral et al., 2021),
which involves temporal order and duration. We then evaluate the models on our test data.
Fine-tuning setting We train and evaluate the LMs on our training data and test data.
Additionally, in the fine-tuning setting, we train the LMs on the split training data described in Sec-
| | | Zero-shot (mono) | Zero-shot (cross) | IID | Fragment Easy | Fragment Hard | Format Easy | Format Hard | ∆ | Span |
|---|---|---|---|---|---|---|---|---|---|---|
| base | seen | - | - | .891±0.02 | .879±0.01 | .812±0.05 | .839±0.02 | .800±0.02 | .039±0.03 | .757±0.03 |
| | unseen | .428±0.02 | - | - | .405±0.04 | .379±0.02 | .897±0.03 | .761±0.04 | **.136**±0.05 | .662±0.05 |
| | ∆ | - | - | - | .474±0.04 | **.433**±0.05 | - | - | - | **.095**±0.06 |
| large | seen | - | - | .955±0.01 | .969±0.01 | .968±0.02 | .920±0.02 | .922±0.01 | -.002±0.02 | .912±0.01 |
| | unseen | .440±0.03 | - | - | .457±0.03 | .419±0.01 | .970±0.02 | .893±0.02 | **.077**±0.03 | .876±0.04 |
| | ∆ | - | - | - | .512±0.03 | **.549**±0.02 | - | - | - | **.036**±0.04 |
| base | seen | - | - | .914±0.02 | .898±0.03 | .851±0.07 | .832±0.03 | .754±0.08 | .078±0.09 | .749±0.06 |
| | unseen | .468±0.03 | - | - | .388±0.02 | .318±0.02 | .846±0.04 | .677±0.12 | **.169**±0.13 | .669±0.05 |
| | ∆ | - | - | - | .510±0.04 | **.533**±0.07 | - | - | - | **.080**±0.08 |
| large | seen | - | - | .937±0.03 | .970±0.01 | .984±0.01 | .914±0.03 | .907±0.01 | .007±0.03 | .819±0.13 |
| | unseen | .460±0.02 | - | - | .445±0.03 | .399±0.04 | .967±0.02 | .884±0.01 | **.083**±0.02 | .799±0.11 |
| | ∆ | - | - | - | .525±0.03 | **.585**±0.04 | - | - | - | **.020**±0.17 |
| base | seen | - | - | .768±0.05 | .683±0.01 | .649±0.02 | .690±0.09 | .607±0.02 | .083±0.09 | .553±0.06 |
| | unseen | - | .411±0.03 | - | .238±0.01 | .309±0.02 | .678±0.06 | .541±0.01 | **.137**±0.06 | .553±0.06 |
| | ∆ | - | - | - | .445±0.01 | **.340**±0.03 | - | - | - | **.000**±0.08 |
| large | seen | - | - | .941±0.01 | .952±0.02 | .955±0.03 | .883±0.05 | .862±0.06 | .021±0.08 | .761±0.08 |
| | unseen | - | .488±0.03 | - | .455±0.04 | .383±0.02 | .935±0.06 | .783±0.08 | **.152**±0.10 | .735±0.09 |
| | ∆ | - | - | - | .497±0.04 | **.572**±0.04 | - | - | - | **.026**±0.12 |

Table 8: Accuracy of each model in the zero-shot and fine-tuning settings (averages and standard deviations over five trials).
tion 3.4, as well as on all of the training data.
In all experiments, we conduct five trials and calculate the averages and standard deviations of the accuracy of the models. Training details are provided in Appendix F.
## 5 Results And Discussion
Table 8 shows the results of all our experiments.
Overall, monolingual models with larger model sizes tend to perform better. In this section, we describe the results for each setting in detail.
## 5.1 Zero-Shot Setting
The two left columns in Table 8 show the results on the zero-shot setting. As Table 8 shows, the accuracy of both the monolingual and cross-lingual models is approximately 40%, and there is no significant difference between them. One possible reason is that SNLI, SICK, and their Japanese versions (JSNLI and JSICK) do not contain temporal inference, and the temporal inference patterns obtained from PLMUTE are only a fraction of the inference patterns required to solve our test set.
## 5.2 Fine-Tuning Setting
The right side of Table 8 shows the results on the fine-tuning setting. As expected, all models are highly accurate on the IID split setting (i.e., the setting in which all training data were used). We then discuss the results of the experiments using the splits described in Section 3.4.
**Tense Fragment-based Split** In the tense fragment-based split, the difference in accuracy between seen and unseen problems was nearly 50% for all models on both FRAGMENT_EASY and FRAGMENT_HARD. This suggests that the models cannot generalize the temporal inferences obtained from the training data.
Table 9 shows an example of unseen problems
that RoBERTa-large could not solve on FRAGMENT_EASY and the corresponding seen problems in the training data. Because all models obtained similar results in relation to the generalization ability of LMs for temporal inference, we focus on the RoBERTa-large model, which achieved the best performance on our dataset. For this example, the model gave the same prediction for both the unseen and seen problems. The other tense fragment problems that the model could not solve on FRAGMENT_EASY have the same characteristics. Specifically, the model tended to predict incorrect labels for problems in which the premises and hypotheses of seen and unseen problems were very similar (differences are highlighted in bold), but the gold labels were different, as shown in Table 9. This suggests that this model does not capture the essential meaning of a sentence but determines the entailment relations based only on superficial information (i.e., the model does not generalize temporal inference patterns).
**Time Format-based Split** As Table 8 shows, all models except XLM-RoBERTa-base achieved 80% accuracies on both unseen problems and seen problems of FORMAT_EASY. Furthermore, detailed analysis revealed that XLM-RoBERTa-base did not solve problems that required inference of the size relationships between time units. This indicates that XLM-RoBERTa-base only fails to generalize the size relation between time units. One potential reason for this is that this model is cross-lingual and not large. In contrast, on FORMAT_HARD, all models exhibited reduced accuracy for the unseen problems compared to the seen problems. This indicates that the models do not have a priori knowledge regarding the size relationships between time units. There-
| | Seen problem | Unseen problem |
|---|---|---|
| | TF: Habituality - Unmentioned TP + Always, Gold label: Neutral | TF: Habituality + Negation - Unmentioned TP + Always, Gold label: Contradiction, Pred label: Neutral |
| P | イヴァン は いつも 図面 を 遅れて 出す。 Ivan wa always drawing o late submit . (Ivan always submits his drawing late.) 2011 年 11 月 28 日 16 時 に イヴァン は 図面 を 出した。 2011 year 11 month 28 day 16 hour ni Ivan wa drawing o submit . (Ivan submitted his drawing at 16:00 on November 28, 2011.) | デイヴ は いつも マンション を 遅れて 訪れる。 Dave wa always apartment o late visit . (Dave always visits the apartment late.) 2002 年 5 月 11 日 14 時 に デイヴ は マンション を 訪れた。 2002 year 5 month 11 day 14 hour ni Dave wa apartment o visit . (Dave visited the apartment on May 11, 2002 at 14:00.) |
| H | イヴァン は 2011 年 11 月 28 日 22 時 に 図面 を 遅れて 出した。 Ivan wa 2011 year 11 month 28 day 22 hour ni drawing o late submit . (Ivan submitted his drawing late at 22:00 on November 28, 2011.) | デイヴ は 2012 年 2 月 1 日 0 時 に マンション を 遅れ ず に 訪れた。 Dave wa 2012 year 2 month 1 day 0 hour ni apartment o late not ni visit . (Dave visited the apartment on February 1, 2012 at 0:00 **without** delay.) |

Table 9: An example of an unseen problem that RoBERTa-large could not solve on FRAGMENT_EASY and the corresponding seen problem in the training data.
fore, we consider that on FORMAT_EASY, BERT
and RoBERTa succeeded in generalizing the inference patterns of the size relationships between time units based on minimal combinations of time units in the training data.
**Time Span-based Split** On the time span-based split, the large models achieved comparable accuracy on both the seen and unseen problems, whereas the base models tended to exhibit lower accuracy on the unseen problems. This suggests that the large models can generalize methods for determining the order relationships between time points, but the base models cannot generalize.
## 6 Conclusion
In this study, we constructed JAMP, a temporal Japanese NLI dataset, using a template-based approach. Our dataset is controllable in terms of difficulty, vocabulary, and size based on this approach.
We conducted experiments using our dataset to probe the generalization ability of pre-trained language models for temporal inference. The experimental results indicated that current LMs can generalize for time format splits and time span splits but fail to generalize for tense fragment splits. Our dataset demonstrates that there is room for improvement in the generalization ability of current standard LMs for temporal inference. Because our method is applicable to the construction of datasets for other linguistic phenomena (e.g., modality, comparative), we plan to investigate the generalization ability of language models for other phenomena using the template-based approach in the future.
## 7 Limitations
In this section, we discuss two limitations of this study. The first limitation is that aspect and temporal commonsense are outside the scope of our dataset. Here, temporal commonsense refers to knowledge regarding events and the appropriate duration of those events. For example, the event
"I washed my face for three years" is unnatural in terms of temporal commonsense, but this study did not consider such unnaturalness.
The second limitation is that the proposed method is currently applicable only to Japanese.
In this study, we used a Japanese case frame dictionary to generate natural sentences. However, other languages such as English do not have resources equivalent to such a dictionary. Therefore, to apply our method to additional languages, we must first prepare a case frame dictionary for each language.
## Acknowledgements
We thank the two anonymous reviewers for their helpful comments and suggestions, which improved this paper. This work was supported by JST, PRESTO grant number JPMJPR21C8, Japan.
## References
Collin F. Baker, Charles J. Fillmore, and John B. Lowe.
1998. The Berkeley FrameNet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90, Montreal, Quebec, Canada. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Zeming Chen and Qiyue Gao. 2022. Curriculum: A
broad-coverage benchmark for linguistic phenomena in natural language understanding. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 3204–3219, Seattle, United States. Association for Computational Linguistics.
Robin Cooper, Richard Crouch, Jan van Eijck, Chris Fox, Josef van Genabith, Jan Jaspars, Hans Kamp, Manfred Pinkal, David Milward, Massimo Poesio, Stephen Pulman, Ted Briscoe, Holger Maier, and Karsten Konrad. 1996. Using the framework. Technical Report LRE 62-051r, The FraCaS Consortium.
Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A.
Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1801–1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith.
2018. Annotation artifacts in natural language inference data. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics.
Daisuke Kawahara and Sadao Kurohashi. 2006. A fullylexicalized probabilistic model for Japanese syntactic and case structure analysis. In *Proceedings of the Human Language Technology Conference of the NAACL,*
Main Conference, pages 176–183, New York City, USA. Association for Computational Linguistics.
Ai Kawazoe, Ribeka Tanaka, Koji Mineshima, and Daisuke Bekki. 2015. An inference problem set for evaluating semantic theories and semantic processing systems for Japanese. In *JSAI International Symposium on Artificial Intelligence*. Springer.
Thomas Kober, Sander Bijl de Vroe, and Mark Steedman. 2019. Temporal and aspectual entailment. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 103–119, Gothenburg, Sweden. Association for Computational Linguistics.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14),
pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA).
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of
the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
Arindam Mitra, Ishan Shrivastava, and Chitta Baral.
2020. Enhancing natural language inference using new and expanded training data sets and new learning models. *Proceedings of the AAAI Conference on* Artificial Intelligence, 34(05):8504–8511.
Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language* Processing, pages 2292–2297, Lisbon, Portugal. Association for Computational Linguistics.
Martha Palmer, Daniel Gildea, and Paul Kingsbury.
2005. The Proposition Bank: An annotated corpus of semantic roles. *Computational Linguistics*, 31(1):71–
106.
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67–81, Brussels, Belgium.
Association for Computational Linguistics.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018b.
Hypothesis only baselines in natural language inference. In *Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics*, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics.
Kyle Richardson, Hai Hu, Lawrence Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8713–8721.
Karin Kipper Schuler. 2005. Verbnet: A broadcoverage, comprehensive verb lexicon. *Ph. D. Thesis,*
University of Pennsylvania.
Tomoki Sugimoto and Hitomi Yanaka. 2022. Compositional semantics and inference system for temporal order based on Japanese CCG. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 104–114, Dublin, Ireland. Association for Computational Linguistics.
Shivin Thukral, Kunal Kukreja, and Christian Kavouras.
2021. Probing language models for understanding of temporal expressions. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 396–406, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, and Aaron Steven White. 2020.
Temporal reasoning in natural language inference.
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4070–4078, Online.
Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Hitomi Yanaka and Koji Mineshima. 2021. Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference. In *Proceedings of the Fourth* BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 337–349, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hitomi Yanaka and Koji Mineshima. 2022. Compositional evaluation on Japanese textual entailment and similarity. Transactions of the Association for Computational Linguistics, 10:1266–1284.
Takumi Yoshikoshi, Daisuke Kawahara, and Sadao Kurohashi. 2020. Multilingualization of a natural language inference dataset using machine translation
(in Japanese). In *Proceedings of the 244th Meeting of Natural Language Processing*.
## A Tense Fragment
Table 10 shows the tense fragments we defined.

| Tense Fragment | Sub-tense Fragment |
|---|---|
| Temporal commonsense | Usage of 現在 (now) |
| Temporal ordering | Continuity of state; Ordering relation |
| Time point | Mentioned time point; Unmentioned time point |
| Temporal anaphora | Reference resolution of 昨日 (yesterday) |
| Interval | Comparison of two intervals; Completion of eventuality; Mentioned time point; Unmentioned time point |
| Habituality | Negation; Existential quantification |

Table 10: Tense fragments we introduced in this study.
## B Problem Creation for Some JSeM Problems

Table 11 shows examples of created problems and corresponding original problems in JSeM. As shown in Table 11, original and new problems are similar but have different gold labels. We also create templates for these created problems.
## C Temporal Expression Generation in SHORT Time Span

The temporal expressions in SHORT are generated as follows. In the case of generating intervals, they are generated as described in Section 3.2, except that the integer selection range is one to three instead of one to nine. In the case of generating time points, we first identify the next largest unit after the smallest unit of the time format in the current problem and then calculate the duration of one-third of that unit. We then determine a selection range from a randomly selected time point to a time point that is advanced by the calculated duration.

For example, if the smallest unit is "hour," then the next largest unit is "day," so the selection range is between a specific time point and another time point one-third of a day (eight hours) in the future.
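A small sketch of this SHORT sampling procedure is given below; the unit table and helper names are ours, and the one-third durations for "day" and "month" are approximations.

```python
import random
from datetime import datetime, timedelta

# One third of the unit that is one step larger than the smallest unit in the problem.
ONE_THIRD_OF_NEXT_LARGER_UNIT = {
    "hour": timedelta(hours=8),     # 1/3 of a day
    "day": timedelta(days=10),      # roughly 1/3 of a month
    "month": timedelta(days=122),   # roughly 1/3 of a year
}

def sample_short_span(smallest_unit: str, base: datetime) -> datetime:
    """Sample a second time point close to `base` (SHORT setting)."""
    window = ONE_THIRD_OF_NEXT_LARGER_UNIT[smallest_unit]
    offset_seconds = random.randint(0, int(window.total_seconds()))
    return base + timedelta(seconds=offset_seconds)

base = datetime(2015, 9, 11, 7)
print(sample_short_span("hour", base))  # within eight hours of the base time point
```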
## D Details for Dataset Artifacts Analysis

As mentioned in Section 3.3.1, the dataset artifacts analysis reveals correlations between labels and specific words. Formally, this analysis is a one-sided binomial hypothesis test with the null hypothesis $p(y|x_i) = 1/3$, where $y \in \{\text{Entailment}, \text{Neutral}, \text{Contradiction}\}$ and $x_i$ is a word included in the vocabulary. For this analysis, we first split the hypothesis and premise sentences into individual words/tokens using Juman++ (Morita et al., 2015). We then count the number of occurrences of the gold label $y$ in the $n_i$ examples for every word $x_i$ present in those examples. $p(y|x_i)$ is estimated based on the fraction of the count of the gold label $y$ over $n_i$. According to the protocol described in Gardner et al. (2021), the null hypothesis is either accepted or rejected with a significance level of $\alpha = 0.01$ based on the Bonferroni correction.
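For reference, this test can be sketched as follows with scipy; treating the vocabulary size as the number of Bonferroni-corrected tests is our assumption, and the counts in the example are toy values.

```python
from scipy.stats import binomtest

def is_artifact(label_count: int, n_i: int, vocab_size: int, alpha: float = 0.01) -> bool:
    """One-sided binomial test of H0: p(y|x_i) = 1/3, with a Bonferroni correction
    over the assumed number of tests (here, the vocabulary size)."""
    p_value = binomtest(label_count, n_i, p=1 / 3, alternative="greater").pvalue
    return p_value < alpha / vocab_size

# Toy example: a word occurring in 200 examples, 120 of which are labeled Entailment,
# tested against a vocabulary of 10,000 words.
print(is_artifact(label_count=120, n_i=200, vocab_size=10_000))
```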
| | Original problem | New problem |
|---|---|---|
| | Gold label: Entailment | Gold label: Contradiction |
| P | スミス は ジョーンズ が 去る 前 に 去った。 Smith wa Jones ga leave before ni leave . (Smith left before Jones left.) ジョーンズ は アンダーソン が 去る 前 に 去った。 Jones wa Anderson ga leave before ni leave . (Jones left before Anderson left.) | スミス は ジョーンズ が 去る 前 に 去った。 Smith wa Jones ga leave before ni leave . (Smith left before Jones left.) ジョーンズ は アンダーソン が 去る 前 に 去った。 Jones wa Anderson ga leave before ni leave . (Jones left before Anderson left.) |
| H | スミス は アンダーソン が 去る 前 に 去った。 Smith wa Anderson ga leave before ni leave . (Smith left before Anderson left.) | スミス は アンダーソン が 去った 後 に 去った。 Smith wa Anderson ga leave after ni leave . (Smith left after Anderson left.) |
| | Gold label: Neutral | Gold label: Entailment |
| P | スミス が 2 時間 以内 に 報告書 を 書いた。 Smith ga 2 hour within ni report o write . (Smith wrote a report within two hours.) | スミス が 2 時間 で 報告書 を 書いた。 Smith ga 2 hour de report o write . (Smith wrote a report in two hours.) |
| H | スミス は その 報告書 を 書く の に 2 時間 を 費やした。 Smith wa that report o write no ni 2 hour o spent . (Smith spent two hours writing that report.) | スミス は その 報告書 を 書く の に 2 時間 を 費やした。 Smith wa that report o write no ni 2 hour o spent . (Smith spent two hours writing that report.) |

Table 11: Examples of created problems and corresponding original problems in JSeM.
| Section | Size |
|---------|------|
| Train | 9,750 (3,050/3,340/3,360) |
| Test | 344 (114/112/118) |

Table 12: JAMP dataset statistics. The numbers in parentheses show the numbers of entailment, contradiction, and neutral examples, respectively.
| Dataset Name | Size |
|---------------------------------------|---------|
| SNLI (Bowman et al., 2015) | 550,152 |
| SICK (Marelli et al., 2014) | 9,840 |
| PLMUTE (Thukral et al., 2021) | 72,720 |
| JSNLI (Yoshikoshi et al., 2020) | 533,005 |
| JSICK (Yanaka and Mineshima, 2022) | 5,000 |
| PLMUTE_ja (Sugimoto and Yanaka, 2022) | 11,220 |
Table 13: Statistics of the datasets used in our experiments.
## E Data Statistics
Table 12 shows JAMP dataset statistics. Table 13 shows sizes of datasets used in our experiments.
## F Training Details
We select the best learning rate among [6e-6, 8e-6, 1e-5, 1.2e-5, 2e-5] based on the development set.
We use a batch size of 16 for training and eight for testing.
## G Data Licensing
The Japanese case frame dictionary is distributed by Gengo-Shigen-Kyokai. JSeM is licensed under the BSD-3-Clause license. Our use of these two datasets is consistent with the terms of the license.
sekizawa-etal-2023-constructing | Constructing Multilingual Code Search Dataset Using Neural Machine Translation | https://aclanthology.org/2023.acl-srw.10 | Code search is a task to find programming codes that semantically match the given natural language queries. Even though some of the existing datasets for this task are multilingual on the programming language side, their query data are only in English. In this research, we create a multilingual code search dataset in four natural and four programming languages using a neural machine translation model. Using our dataset, we pre-train and fine-tune the Transformer-based models and then evaluate them on multiple code search test sets. Our results show that the model pre-trained with all natural and programming language data has performed best in most cases. By applying back-translation data filtering to our dataset, we demonstrate that the translation quality affects the model's performance to a certain extent, but the data size matters more. |

# Constructing Multilingual Code Search Dataset Using Neural Machine Translation
Ryo Sekizawa1 Nan Duan2 Shuai Lu2 **Hitomi Yanaka**1 1The University of Tokyo 2Microsoft Research Asia
{ryosekizawa,hyanaka}@is.s.u-tokyo.ac.jp
{nanduan,shuailu}@microsoft.com
## Abstract
Code search is a task to find programming codes that semantically match the given natural language queries. Even though some of the existing datasets for this task are multilingual on the programming language side, their query data are only in English. In this research, we create a multilingual code search dataset in four natural and four programming languages using a neural machine translation model. Using our dataset, we pre-train and fine-tune the Transformer-based models and then evaluate them on multiple code search test sets. Our results show that the model pre-trained with all natural and programming language data has performed best in most cases. By applying back-translation data filtering to our dataset, we demonstrate that the translation quality affects the model's performance to a certain extent, but the data size matters more.
## 1 Introduction
Code search is the task of finding a semantically corresponding programming language code given a natural language query by calculating their similarity. With the spread of large-scale code-sharing repositories and the rise of advanced search engines, high-performance code search is an important technology to assist software developers. Since software developers worldwide search for codes in their native language, we expect code search models to be multilingual. Although many previous studies focus on multilingual code tasks other than code search (e.g., code generation, code explanation) (Wang et al., 2021; Ahmad et al., 2021; Fried et al., 2023; Zheng et al., 2023), the existing code search datasets (Husain et al., 2020; Huang et al.,
2021; Shuai et al., 2021) contain only monolingual data for search queries.
In this research, we construct a new multilingual code search dataset by translating natural language data of the existing large-scale dataset using a neural machine translation model. We also use our dataset to pre-train and fine-tune the Transformer (Vaswani et al., 2017)-based model and evaluate it on multilingual code search test sets we create. We show that the model pretrained with all natural and programming language data performs best under almost all settings. We also analyze the relationship between the dataset's translation quality and the model's performance by filtering the fine-tuning dataset using backtranslation. Our model and dataset will be publicly available at https://github.com/ynklab/
XCodeSearchNet. The contributions of this research are as follows:
1. Constructing the large code search dataset consisting of multilingual natural language queries and codes using machine translation.
2. Constructing the multilingual code search model and evaluating it on a code search task using our dataset.
3. Analyzing the correlation between translation quality and the model performance on a code search task.
## 2 Background

## 2.1 Code Search Dataset
CodeSearchNet Corpus1 (CSN; Husain et al., 2020)
is a set of code data (**code**) in six programming languages: Go, Python, Java, PHP, Ruby, and JavaScript, and natural language data describing them (**docstring**). CSN is created by automatically collecting pairs of function code and its documentation that are publicly available on GitHub and permitted for redistribution. This corpus contains approximately 2.3 million data pairs and 4 million code-only data. The natural language data in CSN is function documentation, which is pseudo data of the texts humans use to search for codes.
1https://github.com/github/CodeSearchNet
| | Pre-training (MLM) | Fine-tuning |
|----------------------|---------------|-----------|
| PHP | 662,907 | 1,047,406 |
| Java | 500,754 | 908,886 |
| Python | 458,219 | 824,342 |
| Go | 319,256 | 635,652 |
| JavaScript | 143,252 | 247,773 |
| Ruby | 52,905 | 97,580 |

Table 1: The number of CSN data pairs used for pre-training (MLM) and fine-tuning CodeBERT.
In contrast, several datasets are created based on natural language queries used for code search by humans. CodeXGLUE (Shuai et al., 2021), a benchmark for various code understanding tasks, includes two code search datasets: WebQueryTest
(WQT) and CoSQA (Huang et al., 2021). The query data of these datasets are collected from the users' search logs of Microsoft Bing and the code from CSN. Given these separately collected data, annotators who have programming knowledge manually map the corresponding query and code to construct the dataset. The common feature of these datasets is that all natural language data, such as docstrings and queries, are limited to English and do not support multiple languages.
## 2.2 CodeBERT
CodeBERT (Feng et al., 2020) is a model pre-trained and fine-tuned with CSN and is based on the RoBERTa (Liu et al., 2019) architecture. CodeBERT uses Masked Language Modeling (MLM;
Devlin et al., 2019; Lample and Conneau, 2019)
and Replaced Token Detection (RTD; Clark et al.,
2020) as pre-training tasks. Both docstring and code data in CSN are used in MLM, while only code data are used in RTD. CodeBERT is trained only with English data and is thus not applicable to a code search task with multilingual queries.
## 3 Dataset Construction Using Machine Translation
A possible way to construct a code search dataset for multiple languages is to translate an existing monolingual dataset. However, CSN's large data size makes manually translating all of its docstrings difficult. Table 1 shows the number of CSN data pairs used for pre-training (MLM) and fine-tuning the CodeBERT.
Therefore, we use a machine translation model to translate the English-only data to generate mul-
| | Pre-training Train | Pre-training Valid | Pre-training Test | Fine-tuning Train | Fine-tuning Valid | Test |
|---|---|---|---|---|---|---|
| Go | 316,058 | 3,198 | 28,533 | 635,652 | 28,482 | 14,277 |
| Python | 453,623 | 4,596 | 45,283 | 824,341 | 46,212 | 22,092 |
| Java | 495,768 | 4,986 | 42,237 | 908,885 | 30,654 | 26,646 |
| PHP | 656,277 | 6,630 | 54,406 | 1,047,403 | 52,028 | 28,189 |

Table 2: The sizes of the CSN data we use in our experiments.
tilingual data efficiently. By translating CSN docstrings, we create a multilingual dataset consisting of four natural languages (English, French, Japanese, and Chinese) and four programming languages (Go, Python, Java, and PHP). We also translate the queries in the datasets Feng et al. (2020)
used for fine-tuning and evaluating CodeBERT for our experiments in Section 4.1 and Section 4.2.
In their fine-tuning data, the numbers of positive and negative labels are balanced. Note that we do not use JavaScript and Ruby data, whose sizes are much smaller than those of other programming languages.
As a translation model, we use M2M-100 (Fan et al., 2022), which supports translations in 100 languages.2 M2M-100 achieved high accuracy in translations of low-resource languages by classifying the 100 languages into 14 language families and creating bilingual training data within those families.
We use the m2m_100_1.2B model, which is provided by EasyNMT3, a public framework of machine translation models. We set the model's beam size to 3.
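A minimal sketch of this translation step through EasyNMT is shown below; the example docstrings are placeholders, and the exact preprocessing of the CSN data is omitted.

```python
from easynmt import EasyNMT

# Load M2M-100 (1.2B) through EasyNMT, as described above.
model = EasyNMT("m2m_100_1.2B")

docstrings = [
    "Return the number of elements in the queue.",
    "Open a file and read its contents as a string.",
]

# Translate English docstrings into French, Japanese, and Chinese with beam size 3.
for target in ["fr", "ja", "zh"]:
    translated = model.translate(docstrings, source_lang="en",
                                 target_lang=target, beam_size=3)
    print(target, translated)
```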
We manually annotate labels for some data in our fine-tuning dataset to check the correlation with the original labels, which is found to be 0.911 (see Appendix B for details).
## 4 Baseline Experiments
We conduct baseline experiments, where we train the Transformer-based model with our multilingual dataset under various settings of the data sizes and evaluate it on multiple code search test sets.
## 4.1 Training
We perform pre-training and fine-tuning on a model initialized with the XLM-R (Conneau et al., 2019)
architecture and parameters. XLM-R is a model

2We compared the translation results of some docstrings by several translation models, including Opus-MT and mBART, and chose M2M-100, which achieved the best performance.

3https://github.com/UKPLab/EasyNMT
| Setting | Query | CSN (Go) | CSN (Python) | CSN (Java) | CSN (PHP) | CoSQA (Python) | WQT (Python) |
|---|---|---|---|---|---|---|---|
| No-pre-training | EN | .813 | .801 | .737 | .759 | **.526** | .334 |
| | FR | .780 | .708 | .681 | .691 | **.463** | .302 |
| | JA | .792 | .686 | .641 | .657 | .372 | .311 |
| | ZH | .772 | .660 | .633 | .670 | .337 | .297 |
| All-to-One | EN | .824 | **.851** | .763 | .790 | .494 | **.360** |
| | FR | .798 | **.796** | **.733** | .734 | .432 | **.363** |
| | JA | .805 | **.781** | .700 | .711 | **.460** | .348 |
| | ZH | .788 | **.759** | .712 | .731 | **.427** | **.359** |
| All-to-All | EN | **.835** | .848 | **.786** | **.809** | .473 | .351 |
| | FR | **.808** | .788 | .731 | **.759** | .420 | .346 |
| | JA | **.816** | .778 | **.719** | **.730** | .436 | **.364** |
| | ZH | **.804** | **.759** | **.750** | **.745** | .418 | **.359** |

Table 3: MRR scores of each model setting on the CSN, CoSQA, and WQT test sets (EN, FR, JA, and ZH denote the query language).
pre-trained by MLM with the Wikipedia and Common Crawl corpora for 100 languages using Transformer (Vaswani et al., 2017) and achieved high performance on multilingual tasks, such as question answering. Note that we use the term "pretraining" to refer to further training of XLM-R
with our dataset. In this paper, we use MLM as the learning objective to pre-train XLM-R and then fine-tune it using data pairs whose query and code languages are monolingual. We use monolingual data pairs for fine-tuning instead of a multilingual combination, given that Feng et al. (2020) report that fine-tuning CodeBERT with six programming languages altogether "performs worse than fine-tuning a language-specific model for each programming language." The query and the code are concatenated and fed into the model, which predicts their similarity based on the vector representation of the output [CLS] token. See Appendix C for more details on the training settings, including hyperparameters.
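For reference, a minimal sketch of this fine-tuning setup with the Hugging Face transformers library is shown below; the checkpoint name, example query, and code snippet are illustrative assumptions, and the actual implementation may differ (e.g., the model is additionally pre-trained with MLM before this step).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

query = "vins テーブルからコイン供給データを取得します。"                 # translated query (illustrative)
code = "func retrieveCoinSupply(db *sql.DB) (*CoinSupply, error) { ... }"  # Go code (illustrative)

# The query and the code are encoded as a text pair; the classification head on top of
# the [CLS]/<s> representation predicts whether they match.
inputs = tokenizer(query, code, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(match_prob)
```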
## 4.2 Evaluation
| Model            | Go   | Python | Java | PHP  |
|------------------|------|--------|------|------|
| RoBERTa          | .820 | .809   | .666 | .658 |
| CODEONLY, INIT=S | .793 | .786   | .657 | .617 |
| CODEONLY, INIT=R | .819 | .844   | .721 | .671 |
| MLM, INIT=S      | .830 | .826   | .714 | .656 |
| MLM, INIT=R      | .838 | .865   | .748 | .689 |
| RTD, INIT=R      | .829 | .826   | .715 | .677 |
| MLM+RTD, INIT=R  | .840 | .869   | .748 | .706 |

Table 4: Performance (MRR) of the original CodeBERT on the code search task.
As with Feng et al. (2020), we use Mean Reciprocal Rank (MRR) as an evaluation metric.
$$\mathrm{MRR}={\frac{1}{|Q|}}\sum_{i=1}^{|Q|}{\frac{1}{\operatorname{rank}_{i}}}$$
|Q| refers to the total number of queries. When a test set has 1,000 data pairs, given a natural language query$_i$, the model calculates the similarity with the corresponding code$_i$ and the 999 distractor codes. If the similarity score given for code$_i$ is the 2nd highest among the 1,000 codes, rank$_i$ equals 2.
Then, the average of the inverse of rank$_i$ over all queries is calculated as MRR.
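A small self-contained sketch of this computation is given below; it assumes the gold code for query$_i$ is the $i$-th candidate, and the toy similarity matrix is illustrative.

```python
import numpy as np

def mean_reciprocal_rank(sim_matrix: np.ndarray) -> float:
    """sim_matrix[i, j]: similarity between query_i and code_j;
    the correct code for query_i is assumed to be code_i (the diagonal)."""
    reciprocal_ranks = []
    for i, scores in enumerate(sim_matrix):
        # Rank of the gold code: 1 + number of distractors scored strictly higher.
        rank = 1 + int((scores > scores[i]).sum())
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Toy example with 3 queries and 3 candidate codes.
sims = np.array([[0.9, 0.1, 0.3],
                 [0.2, 0.4, 0.8],   # gold code ranked 2nd -> reciprocal rank 0.5
                 [0.1, 0.2, 0.7]])
print(mean_reciprocal_rank(sims))  # (1 + 0.5 + 1) / 3 = 0.833...
```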
Table 2 shows the sizes of CSN we use in our experiments. Each test set of CSN for MRR evaluation contains 1,000 data pairs randomly sampled from the original test sets. We use CoSQA and WQT as test sets in addition to CSN. As well as CSN, we create CoSQA test sets from the original 20,604 data pairs. We compute the average of MRR scores over three different test sets for CSN
and CoSQA. The original WQT test set has 422 data pairs, so we use it as-is without sampling data like CoSQA.
We translate natural language queries in these test sets using the same machine translation model and parameter settings as the translation of the training data.
## 4.3 Model Settings
We prepare three model settings that differ in the amount and pattern of training data.
No-pre-training An XLM-R model with no further training applied and its initial parameters used.
All-to-One A model that uses data pairs of multilingual queries and monolingual codes for pretraining. The size of pre-training data ranges from 1.2 million to 2.7 million, depending on programming languages.
All-to-All A model that uses data pairs of multilingual queries and multilingual codes for pretraining. The size of pre-training data is over 7.6 million.
## 4.4 Results
Table 3 shows the scores of the MRR evaluation under all settings. The scores on CSN show that All-to-All performed best in Go, Java, and PHP for almost all natural languages. On the other hand, All-to-One showed better scores than All-to-All on the Python test set. It is possible that performance peaked at All-to-One on the Python test set, given that the difference in scores between All-to-One and All-to-All was relatively small (<0.1). On CoSQA and WQT, there were also cases where model settings other than All-to-All performed better.
The performance of the original CodeBERT on a code search task is shown in Table 4. Overall, All-to-All is on par with the performance of CodeBERT
on English data. In particular, All-to-All achieves better scores in Java and PHP than CodeBERT. Note that our experiments and those of CodeBERT differ in the number of test sets used. Thus, it is difficult to compare these scores directly to discuss the model's superiority.
We observed a gradual trend that the scores decreased in English and French and increased in Japanese and Chinese as we increased the size of the pre-training data. This phenomenon might be due to the difference in knowledge of these languages acquired during the original pre-training of XLM-R. The XLM-R pre-training data contain approximately 350 GiB for English and French and approximately 69 GiB and 46 GiB for Japanese and Chinese, respectively. As the parameters of XLM-R were updated during our pre-training, part of the knowledge of English and French that the model originally had was lost. On the other hand, the scores for Japanese and Chinese, for which the model originally had a smaller amount of data, improved as the data size increased.
![3_image_0.png](3_image_0.png)
| Train | 0.2     | 0.3     | 0.4     | 0.5     | 0.6     | 0.7     |
|-------|---------|---------|---------|---------|---------|---------|
| FR    | 621,167 | 613,893 | 597,092 | 570,891 | 530,485 | 391,897 |
| JA    | 612,422 | 594,477 | 552,979 | 480,567 | 388,189 | 250,028 |
| ZH    | 607,468 | 588,808 | 557,748 | 500,622 | 410,369 | 265,986 |

| Valid | 0.2    | 0.3    | 0.4    | 0.5    | 0.6    | 0.7    |
|-------|--------|--------|--------|--------|--------|--------|
| FR    | 27,881 | 27,535 | 26,799 | 25,621 | 24,000 | 20,231 |
| JA    | 27,433 | 26,524 | 24,901 | 21,981 | 16,327 | 10,304 |
| ZH    | 27,115 | 26,178 | 24,971 | 22,280 | 18,445 | 10,792 |

Table 5: The sizes of our dataset for fine-tuning after back-translation filtering is applied.
|    | 0    | 0.2  | 0.3  | 0.4  | 0.5  | 0.6  | 0.7  |
|----|------|------|------|------|------|------|------|
| EN | .835 | N/A  | N/A  | N/A  | N/A  | N/A  | N/A  |
| FR | .808 | .810 | .808 | .805 | .811 | .809 | .807 |
| JA | .816 | .805 | .803 | .817 | .813 | .813 | .802 |
| ZH | .804 | .818 | .818 | .807 | .798 | .802 | .802 |
Table 6: MRR scores with back translation filtering for fine-tuning data. 0 means no filtering applied.
## 5 Analysis on Translation Quality

## 5.1 Back-Translation Filtering
The translation quality of our dataset must affect the model's task performance. Therefore, we investigate whether there is a difference in the scores of the code search task when we filter out the lowquality data from the fine-tuning dataset.
We apply a back-translation filtering method based on previous studies that used machine translation to automatically build high-quality multilingual datasets from English ones (Sobrevilla Cabezudo et al., 2019; Dou et al., 2020; Yoshikoshi et al., 2020). We first apply back-translation to the French, Japanese, and Chinese docstrings. Then we calculate the uni-gram BLEU (Papineni et al., 2002) score between the back-translated docstrings and the original English ones and keep only the data with scores higher than certain thresholds. In our experiments, we apply filtering to the fine-tuning dataset of Go. Table 5 shows the data sizes after back-translation filtering. We set thresholds from 0.2 to 0.7 in increments of 0.1 and compare the model's performance at each threshold. We choose these values because the sizes of the datasets change relatively sharply when filtered with thresholds of 0.3 to 0.6 (Appendix D).
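A minimal sketch of this filtering step is shown below; the field names of the data records and the whitespace tokenization are illustrative assumptions, and the actual pipeline may differ in tokenization and smoothing.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def unigram_bleu(original_en: str, back_translated_en: str) -> float:
    """Uni-gram BLEU between the back-translated docstring and the original English one."""
    reference = [original_en.lower().split()]
    hypothesis = back_translated_en.lower().split()
    return sentence_bleu(reference, hypothesis, weights=(1.0, 0, 0, 0),
                         smoothing_function=SmoothingFunction().method1)

def filter_pairs(pairs, threshold):
    """pairs: list of dicts with 'en' and 'back_translated' fields (names are illustrative)."""
    return [p for p in pairs
            if unigram_bleu(p['en'], p['back_translated']) > threshold]
```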
## 5.2 Results
Table 6 shows the MRR scores of the models whose fine-tuning data are filtered with different thresholds. In every language, the scores peak when we set the threshold between 0.2 and 0.5 and then drop with larger thresholds up to 0.7. This result implies that the filtering successfully removes low-quality data while maintaining the number of training examples, which leads to better MRR scores. We assume that the reduction in size from the original dataset becomes more prominent with thresholds from 0.5 to 0.7 (around 100K-400K examples), which eventually lowers the overall scores.
However, the score changes seem insignificant
(±0.02) among these thresholds. One possible reason is that the data size remains over 250K even after filtering, which should already be enough for fine-tuning in general.
In summary, the results show that filtering out some low-quality data improves the model's performance on the code search task, but removing more than 150K examples worsens the test scores.
## 6 Conclusion
We created a large multilingual code search dataset with a neural machine translation model. We then constructed a multilingual code search model using our dataset. We found that the models pre-trained with all of the multilingual natural language and programming language data achieved the best performance on the code search task in most cases. We also investigated the relationship between the translation quality of our dataset and the model's performance. The results indicated that the data size contributed more to the model's code search performance than the data translation quality.
Overall, this research showed that a publicly available machine translation model helps to translate texts in the programming domain. Our method can be applied to extend datasets to languages other than French, Japanese, and Chinese and to construct models for various natural languages.
## Limitations
We used XLM-R as the baseline model to train with our dataset because we wanted to keep the experimental settings as close as possible to the previous study of CodeBERT while supporting multilingual data. Since CodeBERT is based on RoBERTa, we chose XLM-R, which is also RoBERTa-based and already trained on multilingual data.
## Acknowledgements
We thank the two anonymous reviewers for their helpful comments and suggestions, which improved this paper. This research is supported by JSPS KAKENHI Grant Number JP20K19868 and partially by Microsoft Research Asia (Collaborative Research Sponsorship).
## References
Wasi Ahmad et al. 2021. Unified Pre-training for Program Understanding and Generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics.
Kevin Clark et al. 2020. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
In *International Conference on Learning Representations*.
Alexis Conneau et al. 2019. Unsupervised cross-lingual representation learning at scale. *arXiv preprint* arXiv:1911.02116.
Jacob Devlin et al. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zi-Yi Dou, Antonios Anastasopoulos, and Graham Neubig. 2020. Dynamic data selection and weighting for iterative back-translation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5894–5904, Online. Association for Computational Linguistics.
Angela Fan et al. 2022. Beyond english-centric multilingual machine translation. The Journal of Machine Learning Research, 22(1):107:4839–107:4886.
Zhangyin Feng et al. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online.
Association for Computational Linguistics.
Daniel Fried et al. 2023. InCoder: A Generative Model for Code Infilling and Synthesis. In *The Eleventh International Conference on Learning Representations*.
Junjie Huang et al. 2021. CoSQA: 20,000+ Web Queries for Code Search and Question Answering.
In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5690–
5700. Association for Computational Linguistics.
Hamel Husain et al. 2020. CodeSearchNet Challenge:
Evaluating the State of Semantic Code Search. *arXiv* preprint arXiv:1909.09436.
Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291.
Yinhan Liu et al. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.
Kishore Papineni et al. 2002. BLEU: A method for automatic evaluation of machine translation. In *Proceedings of the 40th Annual Meeting on Association for* Computational Linguistics, ACL '02, page 311–318, USA. Association for Computational Linguistics.
Lu Shuai et al. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. *arXiv preprint arXiv:2102.04664*.
Marco Antonio Sobrevilla Cabezudo, Simon Mille, and Thiago Pardo. 2019. Back-translation as strategy to tackle the lack of corpus in natural language generation from semantic representations. In *Proceedings* of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 94–103, Hong Kong, China.
Association for Computational Linguistics.
Ashish Vaswani et al. 2017. Attention is All you Need.
In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Yue Wang et al. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Takumi Yoshikoshi et al. 2020. Multilingualization of a natural language inference dataset using machine translation. *The 244th meeting of IPSJ Natural Language Processing*, 2020(6):1–8.
Qinkai Zheng et al. 2023. CodeGeeX: A PreTrained Model for Code Generation with Multilingual Evaluations on HumanEval-X. *arXiv preprint* arXiv:2303.17568.
## A Codesearchnet
Table 1 shows the size of CSN for each programming language used for pre-training CodeBERT
with MLM and fine-tuning on the code search task.
The number of data for fine-tuning in Go is listed as 635,635 in Feng et al. (2020), but the dataset publicly provided contains 635,652 data.
## B Dataset Translation
We manually evaluate the translation quality of our dataset. Table 7 shows examples of translation of query data from English to Japanese using M2M100. Since queries of CSN are based on source code descriptions, some of them contain strings that do not necessarily need to be translated, such as variable names, function names, and technical terms
(e.g., SetStatus, retrieveCoinSupply). M2M100 successfully translates the entire sentence, leaving such domain-specific strings as needed.
On the other hand, we observe some errors, such as translating to unknown words (e.g., "alphanumeric" to "アルファナウマリ") or omitting some texts from the translation.
We also manually annotate the labels of 45 sampled data pairs from the fine-tuning dataset of Japanese queries and Go codes and calculate how well they match the original labels. These 45 data pairs do not contain queries that failed to be translated and remained in English. Among the 45 data pairs, 28 have "1" as their label and 17 have "0". We measure the agreement with the original labels in terms of accuracy, which is 0.911.
## C Training Settings
As hyperparameters for pre-training the model, we set the batch size to 64, the maximum input length to 256, and the learning rate to 2e-4. As hyperparameters for the fine-tuning of the model, we set the batch size to 16, the learning rate to 1e-5, and the number of max training epochs to 3. In both cases, we use Adam as the optimizer.
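For convenience, the settings above can be summarized as a small configuration sketch; the helper function and variable names are illustrative and not part of our released code.

```python
import torch

# Hyperparameters as described above; `model` is assumed to be the XLM-R-based
# classifier described in Section 4.1.
PRETRAIN_CONFIG = dict(batch_size=64, max_input_length=256, learning_rate=2e-4)
FINETUNE_CONFIG = dict(batch_size=16, learning_rate=1e-5, max_epochs=3)

def build_optimizer(model: torch.nn.Module, stage: str) -> torch.optim.Adam:
    """Adam is used as the optimizer in both stages."""
    cfg = PRETRAIN_CONFIG if stage == "pretrain" else FINETUNE_CONFIG
    return torch.optim.Adam(model.parameters(), lr=cfg["learning_rate"])
```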
## D Back-Translation Filtering
Table 8 shows an example of the removed data by filtering. Table 9 shows the data size of each filtering threshold.
| Original (EN) | Translated (JA) | Quality |
|---------------|-----------------|---------|
| SetStatus sets the Status field s value . | SetStatus は、Status フィールドの値を設定します。 | ✓ |
| retrieveCoinSupply fetches the coin supply data from the vins table . | retrieveCoinSupply は、vins テーブルからコイン供給データを取得します。 | ✓ |
| stateIdent scans an alphanumeric or field . | stateIdent は、アルファナウマリまたはフィールドをスキャンします。 | ✗ Unknown word |
| VisitFrom calls the do function starting from the first neighbor w for which w ≥ a with c equal to the cost of the edge from v to w . The neighbors are then visited in increasing numerical order . If do returns true VisitFrom returns immediately skipping any remaining neighbors and returns true . | VisitFrom は、最初の隣人 w から始まる do 関数を呼び出し、その w ≥ a と c は v から w までのエッジのコストに等しい。 If do returns true VisitFrom returns immediately skipping any remaining neighbors and returns true. もしそうであれば、VisitFromは直ちに残りの隣人を無視して true を返します。 | ✗ Wrong translation / Omission |

Table 7: Examples of query data translated from English to Japanese using M2M-100.
| Original (EN) | Translated (JA) | Back-translated (EN) |
|---------------|-----------------|----------------------|
| NoError asserts that a function returned no error ( i . e . nil ) . actualObj err : = SomeFunction () if a . NoError ( err ) { assert . Equal ( t actualObj expectedObj ) } Returns whether the assertion was successful ( true ) or not ( false ) . | NoError は、関数がエラーを返しません ( i. e. nil ) を主張します。 まあ、あれ? まあ、あれ? まあ、あれ? まあ、あれ? まあ、あれ? まあ、あれ? 真実(真実)か否かを返す。 | NoError claims that the function does not return an error (i.e. nil). Oh well that? Oh well that? Oh well that? Oh well that? Oh well that? It is the truth or the truth. |

The original query contains a code-like sequence (bold texts), so the model could not successfully translate it (underline texts).
Table 8: An example of filtered-out query data (Japanese, Go, threshold=0.4).
| Train | 0.1     | 0.2     | 0.3     | 0.4     | 0.5     | 0.6     | 0.7     | 0.8     | 0.9    |
|-------|---------|---------|---------|---------|---------|---------|---------|---------|--------|
| FR    | 626,130 | 621,167 | 613,893 | 597,092 | 570,891 | 530,485 | 391,897 | 224,928 | 78,989 |
| JA    | 621,857 | 612,422 | 594,477 | 552,979 | 480,567 | 388,189 | 250,028 | 76,965  | 27,670 |
| ZH    | 618,904 | 607,468 | 588,808 | 557,748 | 500,622 | 410,369 | 265,986 | 71,625  | 20,173 |

| Valid | 0.1    | 0.2    | 0.3    | 0.4    | 0.5    | 0.6    | 0.7    | 0.8    | 0.9   |
|-------|--------|--------|--------|--------|--------|--------|--------|--------|-------|
| FR    | 28,123 | 27,881 | 27,535 | 26,799 | 25,621 | 24,000 | 20,231 | 11,646 | 4,647 |
| JA    | 27,837 | 27,433 | 26,524 | 24,901 | 21,981 | 16,327 | 10,304 | 5,422  | 1,806 |
| ZH    | 27,693 | 27,115 | 26,178 | 24,971 | 22,280 | 18,445 | 10,792 | 4,228  | 1,002 |
Table 9: The sizes of our fine-tuning dataset after back-translation filtering with thresholds in increments of 0.1. |
yuasa-etal-2023-multimodal | Multimodal Neural Machine Translation Using Synthetic Images Transformed by Latent Diffusion Model | https://aclanthology.org/2023.acl-srw.12 | This study proposes a new multimodal neural machine translation (MNMT) model using synthetic images transformed by a latent diffusion model. MNMT translates a source language sentence based on its related image, but the image usually contains noisy information that are not relevant to the source language sentence. Our proposed method first generates a synthetic image corresponding to the content of the source language sentence by using a latent diffusion model and then performs translation based on the synthetic image. The experiments on the English-German translation tasks using the Multi30k dataset demonstrate the effectiveness of the proposed method. | # Multimodal Neural Machine Translation Using Synthetic Images Transformed By Latent Diffusion Model
Ryoya Yuasa1 Akihiro Tamura1 **Tomoyuki Kajiwara**2 Takashi Ninomiya2 **Tsuneo Kato**1 1Doshisha University 2Ehime University
{ctwh0190@mail4, aktamura@mail, tsukato@mail}.doshisha.ac.jp
{kajiwara, ninomiya}@cs.ehime-u.ac.jp
## Abstract
This study proposes a new multimodal neural machine translation (MNMT) model using synthetic images transformed by a latent diffusion model. MNMT translates a source language sentence based on its related image, but the image usually contains noisy information that are not relevant to the source language sentence. Our proposed method first generates a synthetic image corresponding to the content of the source language sentence by using a latent diffusion model and then performs translation based on the synthetic image. The experiments on the English-German translation tasks using the Multi30k dataset demonstrate the effectiveness of the proposed method.
## 1 Introduction
Recently, multimodal neural machine translation
(MNMT) (Specia et al., 2016), which uses images in addition to source language sentences for translation, has attracted attention in the field of machine translation (MT). Images related to source language sentences are considered to improve translation performance by resolving ambiguity during translation and complementing information that is difficult to capture with source language sentences. However, a source language sentence often only describes one aspect of the contents included in its related image.
Figure 1 shows an example from a standard dataset in MNMT, the Multi30k dataset (Elliott et al., 2016). As shown in Figure 1, multiple source language sentences with differing content are associated with a single image in the Multi30k. For example, Source Language Sentence 2 does not mention the house in the related image. Therefore, related images are not necessarily optimal as auxiliary information for MT.
Therefore, in this study, we propose a new MNMT model using a synthetic image generated 76 by image conversion with a latent diffusion model. Specifically, an original related image is converted with a latent diffusion model based on its source language sentence; content unrelated to the source language sentence is eliminated from the original image, and an image conforming with the source language sentence is generated. Subsequently, translation is performed by using the converted synthetic image instead of the original related image.
Our aim is to improve translation performance by using related images that better reflect the content of source language sentences as auxiliary information for translation.
We verified the effectiveness of our proposed method on the English-German translation tasks using the Multi30k dataset (Elliott et al., 2016)
and the Ambiguous COCO dataset (Elliott et al.,
2017). The results confirmed that, compared with a conventional MNMT using the original related images in the Multi30k, our method improved the BLEU score by 0.14 on both the Multi30k Test 2016 and Test 2017, and by 0.39 on the Ambiguous COCO. Additionally, CLIPScore (Hessel et al.,
2021), which was used to calculate the similarity between a source language sentence and an image, confirmed that the synthetic images used in our method more closely match the source language sentences than the original related images.
## 2 Conventional Mnmt Models
MNMT models based on Transformer (Vaswani et al., 2017) have recently become mainstream in the field of MNMT. Various attempts have been made to improve their translation performance, including the introduction of visual attention mechanisms (Nishihara et al., 2020), as well as the method of simultaneously learning feature representations of text and images using a shared encoder (Elliott and Kádár, 2017). Li et al. (2022)
have proposed a Transformer MNMT model incorporating Selective Attention, an attention mechanism
![1_image_0.png](1_image_0.png)
that captures relationships between words in a source language sentence and patches of its related image. We outline the Selective Attention MNMT
model, which is used as the base MNMT model in this study, below.
The Selective Attention MNMT model first encodes the source language sentence Xtext and the related image Ximg into feature expressions Htext and Himg by Eqs. (1) and (2), respectively.
$$\begin{array}{c}{{H^{\mathrm{text}}=\mathrm{TextEncoder}(X^{\mathrm{text}}),}}\\ {{{}}}\\ {{H^{\mathrm{img}}=W\;\mathrm{ImageEncoder}(X^{\mathrm{img}}),}}\end{array}$$
where W, TextEncoder, and ImageEncoder are the parameter matrix, Transformer Encoder, and Vision Transformer (Dosovitskiy et al., 2021), respectively.
Then, Selective Attention captures relationships between image patches and source words using an attention mechanism as follows:
$$H_{attn}^{\rm img}={\rm Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V,\tag{3}$$
where $Q$, $K$, and $V$ are $H^{\rm text}$, $H^{\rm img}$, and $H^{\rm img}$, respectively, and $d_k$ is the dimension of $H^{\rm text}$.
Subsequently, the gated fusion mechanism (Zhang et al., 2020) generates a feature expression Hout that represents the source language sentence and the image while controlling the influence of the image by Eqs. (4) and (5).
$$\lambda=\mathrm{Sigmoid}(UH^{\mathrm{text}}+VH_{attn}^{\mathrm{img}}),\tag{4}$$
$$H^{\mathrm{out}}=(1-\lambda)\cdot H^{\mathrm{text}}+\lambda\cdot H_{attn}^{\mathrm{img}},\tag{5}$$
where U and V are learnable parameter matrices.
Finally, Hout is input to the Transformer Decoder to generate a translated sentence.
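For clarity, a minimal PyTorch sketch of the selective attention and gated fusion described by Eqs. (2)-(5) is given below; the class name and dimensions are illustrative, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveAttentionFusion(nn.Module):
    """Sketch of Eqs. (2)-(5): project image patches, attend to them with text
    queries, and merge the two modalities via a gated fusion."""

    def __init__(self, d_text: int, d_img: int):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d_text, bias=False)   # W in Eq. (2)
        self.gate_u = nn.Linear(d_text, d_text, bias=False)    # U in Eq. (4)
        self.gate_v = nn.Linear(d_text, d_text, bias=False)    # V in Eq. (4)

    def forward(self, h_text, img_feats):
        # h_text: (batch, src_len, d_text); img_feats: (batch, n_patches, d_img)
        h_img = self.img_proj(img_feats)                                    # Eq. (2)
        scores = torch.matmul(h_text, h_img.transpose(1, 2)) / h_text.size(-1) ** 0.5
        h_img_attn = torch.matmul(F.softmax(scores, dim=-1), h_img)         # Eq. (3)
        lam = torch.sigmoid(self.gate_u(h_text) + self.gate_v(h_img_attn))  # Eq. (4)
        return (1 - lam) * h_text + lam * h_img_attn                        # Eq. (5)
```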
![1_image_1.png](1_image_1.png)
## 3 Proposed Method
In this section, we propose an MNMT model that uses synthetic images transformed from related images based on source language sentences. Figure 1 shows an overview of the proposed method.
The MNMT dataset consists of the triplets of a source language sentence, a target language sentence, and a related image. In typical MNMT
datasets, each source language sentence usually only represents one aspect of the content included in the related images; there are many cases where content unrelated to the source language sentence exists in the related image. For example, the image in Figure 1 shows a scene where a girl in a pink dress climbs the stairs to enter a wooden house, but Source Language Sentence 1 does not mention the climbing of stairs. Further, Source Language Sentence 2 does not refer to a house. Therefore, related images are not necessarily the best aids to translation.
Accordingly, our proposed method first uses a latent diffusion model to eliminate content unrelated to the source language sentence from the related
image and generate a synthetic image that corresponds to the source language sentence (see Section 3.1). Then, translation is performed with a conventional MNMT model (e.g., the Selective Attention MNMT model in our experiments) using the generated synthetic image and the source language sentence. Because this makes it easier to capture the relationship between the input image and text during translation, we expect the improvement of translation performance.
## 3.1 Image Transformation: Latent Diffusion Model
This section explains the latent diffusion model (Rombach et al., 2022) used in the image transformation step of our proposed method.
The latent diffusion model applies the diffusion model (Sohl-Dickstein et al., 2015) to the latent space of VAE (Kingma and Welling, 2014) and consists mainly of the VAE, U-Net (Ronneberger et al., 2015), and a text encoder (see Figure 2). In the latent diffusion model, an input image is projected from pixel space into a low-dimensional latent space using a VAE Encoder to obtain its latent representation. Then Gaussian noise is continuously added to the latent expression by a diffusion process. Next, in a reverse diffusion process, U-Net is used multiple times to gradually remove noise from the latent expression that contained noise. At this time, the U-Net is conditioned by the feature representation generated from a text by the text encoder. This conditioning is realized by a cross attention mechanism. Finally, the VAE decoder projects the denoised latent representation from latent space to pixel space to obtain the output image.
The loss function for the latent diffusion model is given as follows:
$$L_{\mathrm{LDM}}:=\mathbb{E}_{\varepsilon(x),y,\epsilon\sim\mathcal{N}(0,1),t}\big[\|\epsilon-\epsilon_{\theta}(z_{t},t,\tau_{\theta}(y))\|_{2}^{2}\big],$$
where ε, ϵθ, and τθ represent a VAE encoder, an U-Net, and a text encoder, respectively, and x, y, ϵ, t, and zt are an input image, a text, a Gaussian noise, time, and the latent representation of time t, respectively.
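A minimal sketch of this training objective is given below, assuming diffusers-style `vae`, `unet`, `text_encoder`, and noise `scheduler` components are already built; it is intended only to make the loss concrete, not to reproduce the original training code.

```python
import torch
import torch.nn.functional as F

def ldm_loss(vae, unet, text_encoder, scheduler, image, text_tokens):
    """Sketch of L_LDM above (components assumed to follow the diffusers API)."""
    # Encode the image into the latent space (the VAE encoder plays the role of epsilon).
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # Sample Gaussian noise and a diffusion timestep t, then form the noisy latent z_t.
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.size(0),), device=latents.device)
    noisy_latents = scheduler.add_noise(latents, noise, t)
    # Condition the U-Net on the text encoding tau_theta(y) via cross attention.
    cond = text_encoder(text_tokens)[0]
    noise_pred = unet(noisy_latents, t, encoder_hidden_states=cond).sample
    # ||epsilon - epsilon_theta(z_t, t, tau_theta(y))||^2
    return F.mse_loss(noise_pred, noise)
```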
In our proposed method, a source language sentence and its related image are input to the text encoder and the VAE encoder, respectively, to convert the related image into a synthetic image that conforms to the source language sentence.
## 4 Experiments 4.1 Experimental Setup
We verified the effectiveness of the proposed method on the English-German translation tasks using the Multi30k and the Ambiguous COCO. We used the Multi30k training data (29,000 triplets)
and the Multi30k validation data (1,014 triplets) as our training and validation data, and used the Multi30k Test 2016 (1,000 triplets), the Multi30k Test 2017 (1,000 triplets), and the Ambiguous COCO (461 triplets) as our test data.
We compared the translation performance of our proposed method (*MNMT(conv.)*) with the translation performance of 1) an NMT model that does not use related images (NMT); 2) an MNMT model that uses original images from the dataset as related images (*MNMT(orig.)*); 3) and an MNMT model that uses images generated only from source language sentences as related images (*MNMT(gen.)*).
Transformer-Tiny1 was used as the NMT model.
This model, with a reduced number of layers, size of hidden layers, number of attention mechanism heads, etc., as compared to typical Transformer models, is suitable for small-scale datasets.2 According to Wu et al. (2021), we set the number of encoder and decoder layers, the size of the hidden layer, the input size of the feed-forward layer, the number of attention mechanism heads, the dropout, and the label smoothing weight to 4, 128, 256, 5, 0.3, and 0.1, respectively. Adam (Kingma and Ba, 2015) was used as the optimization method, with β1 = 0.9 and β2 = 0.98. The learning rate was linearly warmed up from 1e−7to 5e−3 over the first 2,000 steps, and then it was decreased proportionally to the number of updates. The vocabulary dictionary was shared between the source language and the target language, and created by Byte Pair Encoding (Sennrich et al., 2016) with 10,000 merge operations.
The Selective Attention MNMT3 was used as the MNMT model. As for Vision Transformer, vit_base_patch16_3844 was used for image feature extraction.
| Model       | Test 2016 | Test 2017 | Ambiguous COCO |
|-------------|-----------|-----------|----------------|
| NMT         | 40.50     | 31.31     | 27.81          |
| MNMT(orig.) | 41.06     | 32.06     | 27.91          |
| MNMT(gen.)  | 40.81     | 31.81     | **28.54**      |
| MNMT(conv.) | 41.20     | **32.20** | 28.30          |

Table 1: Translation Performance (BLEU [%])
| Model       | Test 2016 | Test 2017 | Ambiguous COCO |
|-------------|-----------|-----------|----------------|
| MNMT(orig.) | 79.59     | 78.32     | 78.17          |
| MNMT(conv.) | 79.74     | 79.35     | 80.08          |
Table 2: CLIPScore: Similarity between Source Language Sentences and Related Images

Stable Diffusion,5 based on a latent diffusion model, was adopted for the generation of related images in *MNMT(gen.)* and the image transformation in *MNMT(conv.)*; the specific model used was stable-diffusion-v1-5.6 StableDiffusionPipeline and StableDiffusionImg2ImgPipeline from diffusers7 were used for implementation.
For image generation in *MNMT(conv.)* and MNMT(gen.), we used the default parameters. We set guidance_scale and num_inference_steps to 7.5 and 50 for *MNMT(gen.)*, and guidance_scale and strength to 7.5 and 0.8 for *MNMT(conv.)*. The hyperparameters, optimization methods, and vocabulary dictionary creation methods during training were the same as the settings used for the NMT
model.
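For illustration, the two pipelines can be invoked roughly as below with the parameters listed above; the checkpoint identifier, prompt, and image path are illustrative assumptions, and argument names may differ slightly across diffusers versions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # stable-diffusion-v1-5 checkpoint (assumed repo id)
source_sentence = "a girl in a pink dress climbing the stairs of a wooden house"   # illustrative
related_image = Image.open("related_image.jpg").convert("RGB").resize((512, 512))  # illustrative

# MNMT(gen.): generate an image from the source sentence only.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
gen_image = txt2img(prompt=source_sentence,
                    guidance_scale=7.5, num_inference_steps=50).images[0]

# MNMT(conv.): convert the original related image conditioned on the source sentence.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
conv_image = img2img(prompt=source_sentence, image=related_image,
                     guidance_scale=7.5, strength=0.8).images[0]
```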
In decoding for all models, we averaged checkpoints at the last 10 epochs before the end of training, and used beam search with a beam width of 5.
BLEU (Papineni et al., 2002) was used as the evaluation measure. We trained the models with five different random seeds, and evaluated the model with the highest BLEU on the validation data.
## 4.2 Results
Table 1 shows the experimental results. As Table 1 shows, the three MNMT models using image information have higher BLEU scores across all datasets than the NMT model that does not use image information. This confirms that image information helped improve translation performance on the datasets used in our experiments.
Further, a comparison of the three MNMT
models shows that our proposed *MNMT(conv.)*
achieved the highest translation performance on Test 2016 and Test 2017. *MNMT(gen.)* had a higher translation performance than *MNMT (conv.)* on Ambiguous COCO, but overall, *MNMT (conv.)*
had better results, confirming the effectiveness of the proposed method.
## 5 Discussion
This section analyzes the synthetic images used in the proposed method. Examples of transformed images are shown in Appendix A. In order to investigate how much of the image corresponds to the source language sentence, we computed CLIPScore (Hessel et al., 2021), which measures the similarity between the image used and the source language sentence by using $\mathrm{CLIPScore}(c, v) =$
$w \cdot \max(\cos(c, v), 0)$, where $c$ and $v$ are the feature vectors from the text encoder and the image encoder of CLIP (Radford et al., 2021), respectively. $w$ is used to rescale the output and, following Hessel et al. (2021), we set it to 2.5.
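A minimal sketch of this metric with the transformers CLIP implementation is shown below; the checkpoint name is an assumption (Hessel et al. (2021) use a ViT-B/32 CLIP model), and the official CLIPScore implementation may differ in preprocessing.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(sentence: str, image: Image.Image, w: float = 2.5) -> float:
    """CLIPScore(c, v) = w * max(cos(c, v), 0)."""
    inputs = processor(text=[sentence], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        c = model.get_text_features(input_ids=inputs["input_ids"],
                                    attention_mask=inputs["attention_mask"])
        v = model.get_image_features(pixel_values=inputs["pixel_values"])
    cos = torch.nn.functional.cosine_similarity(c, v).item()
    return w * max(cos, 0.0)
```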
The evaluation results are shown in Table 2. The table shows that the synthetic images converted by our proposed method have a higher similarity to the source language sentences than the original related images across all datasets. In particular, the largest improvement (+1.91 CLIPScore) has been observed on Ambiguous COCO, which includes more ambiguity than the other two test datasets.
These results confirm that related images which better reflect the source languages can be used as aids to translation via our proposed method.
## 6 Conclusion
In this study, we proposed a new MNMT model that uses a latent diffusion model to transform related images into synthetic images that more closely conform to source language sentences and uses the transformed images as auxiliary information for MT. The experiments on the English-German translation tasks using the Multi30k dataset showed that the proposed method can achieve higher translation performance than conventional methods, demonstrating the effectiveness of our proposed method.
The evaluation using CLIPScore confirms that the images used in our method possess more similarities to the source language sentences than the original images.
## Limitations
In this work, we confirm the effectiveness of the proposed method only on the English-German translation tasks using the Multi30k dataset, the most commonly used dataset in the MNMT research area. It is not clear whether the proposed method is effective for language pairs other than English-German, or when a larger training dataset is used (e.g., when using an existing data augmentation method for MNMT). We will leave these verification experiments for future work.
The proposed method has improved translation performance of MT, but the performance is not perfect and translation results could include translation errors. Accordingly, there still remains a possibility that translation results by the proposed method could convey incorrect information.
The proposed method requires an additional process for transforming images, compared with conventional MNMT models. The experiment, including model training and testing, on the proposed model *MNMT(conv.)* took about 20 hours longer than that on the baseline MNMT model MNMT(orig.) when using RTX3090 GPU × 1.
## Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number JP22K12177, JP21K12031. These research results were partially obtained from the commissioned research (No. 225) by National Institute of Information and Communications Technology (NICT), JAPAN.
## References
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on* Learning Representations.
Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In *Proceedings of the Second Conference on Machine Translation*, pages 215–233, Copenhagen, Denmark. Association for Computational Linguistics.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-
German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70–
74, Berlin, Germany. Association for Computational Linguistics.
Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 130–141, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A
reference-free evaluation metric for image captioning.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7514–7528, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *International* Conference on Learning Representations (Poster).
Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In 2nd International Conference on Learning Representations.
Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, and JingBo Zhu. 2022. On vision features in multimodal machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 6327–6337, Dublin, Ireland. Association for Computational Linguistics.
Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, and Hideki Nakayama. 2020. Supervised visual attention for multimodal neural machine translation. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 4304–4314, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *Proceedings of the 38th International* Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference*
on Computer Vision and Pattern Recognition, pages 10684–10695.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox.
2015. U-net: Convolutional networks for biomedical image segmentation. In *Medical Image Computing* and Computer-Assisted Intervention - MICCAI 2015, pages 234–241.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In *Proceedings of the 32nd International* Conference on Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pages 2256–
2265.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*,
pages 543–553, Berlin, Germany. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30.
Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, and Ben Kao. 2021. Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 6153–6166, Online.
Association for Computational Linguistics.
Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao.
2020. Neural machine translation with universal visual representation. In International Conference on Learning Representations.
## A Appendix
![6_image_0.png](6_image_0.png)
|
wang-etal-2023-enhancing | Enhancing {A}ncient {C}hinese Understanding with Derived Noisy Syntax Trees | https://aclanthology.org/2023.acl-srw.15 | Despite the rapid development of neural-based models, syntax still plays a crucial role in modern natural language processing. However, few studies have incorporated syntactic information into ancient Chinese understanding tasks due to the lack of syntactic annotation. This paper explores the role of syntax in ancient Chinese understanding based on the noisy syntax trees from unsupervised derivation and modern Chinese syntax parsers. On top of that, we propose a novel syntax encoding component {--} confidence-based syntax encoding network (cSEN) to alleviate the side effects from the existing noise caused by unsupervised syntax derivation and the incompatibility between ancient and modern Chinese. Experiments on two typical ancient Chinese understanding tasks, ancient poetry theme classification and ancient-modern Chinese translation, demonstrate that syntactic information can effectively enhance the understanding of ancient Chinese over strong baselines, and that the proposed cSEN plays an important role in noisy scenarios. | # Enhancing Ancient Chinese Understanding With Derived Noisy Syntax Trees
Shitou Zhang1,2, Ping Wang1,∗, Zuchao Li2,∗**, Jingrui Hou**3 1School of Information Management, Wuhan University 2School of Computer Science, Wuhan University 3Department of Computer Science, Loughborough University
{shitouzhang,wangping,zcli-charlie}@whu.edu.cn [email protected]
## Abstract
Despite the rapid development of neural-based models, syntax still plays a crucial role in modern natural language processing. However, few studies have incorporated syntactic information into ancient Chinese understanding tasks due to the lack of syntactic annotation. This paper explores the role of syntax in ancient Chinese understanding based on the noisy syntax trees from unsupervised derivation and modern Chinese syntax parsers. On top of that, we propose a novel syntax encoding component - confidence-based syntax encoding network (cSEN) to alleviate the side effects from the existing noise caused by unsupervised syntax derivation and the incompatibility between ancient and modern Chinese. Experiments on two typical ancient Chinese understanding tasks, ancient poetry theme classification and ancient-modern Chinese translation, demonstrate that syntactic information can effectively enhance the understanding of ancient Chinese over strong baselines, and that the proposed cSEN plays an important role in noisy scenarios.
## 1 Introduction
Ancient Chinese literature, such as classical poetry, books, and records, is a highly representative and distinctive cultural heritage that is receiving increasing attention from the NLP academia. However, directly applying modern Chinese processing methods to ancient texts is not appropriate due to the differences in syntax and semantics between ancient and modern Chinese. Chinese is one of the oldest written languages in the world, with a history of at least 6,000 years (Norman, 1988). Over time, the language has undergone many changes, such as the transition from literary to vernacular Chinese in the early 20th century (Weiping, 2017),
resulting in a significant gap between ancient and modern Chinese.
∗ Corresponding authors.
![0_image_0.png](0_image_0.png)
Syntactic features has been utilized in a wide range of NLP tasks, including coreference resolution (Fang and Fu, 2019; Trieu et al., 2019; Jiang and Cohn, 2022), machine reading comprehension
(Zhang et al., 2020; Guo et al., 2020), and machine translation (Currey and Heafield, 2019; Zhang et al.,
2019a; Bugliarello and Okazaki, 2020). Despite the effectiveness of syntax in modern Chinese understanding (Li et al., 2018; Xia et al., 2019; Zhang et al., 2020), few studies have incorporated syntactic information into ancient Chinese processing.
Most works only take into account explicit features, such as era (Chang et al., 2021) and imagery (Shen et al., 2019), ignoring implicit syntactic features.
The main reason for this lies in two aspects: (1) the linguistic gap between ancient and modern Chinese makes it difficult for supervised modern Chinese syntax parsers to correctly parse ancient Chinese expressions; (2) training a supervised ancient Chinese syntax parser from scratch can be highly costly due to the lack of annotated data.
Unsupervised syntax parsing or directly employing modern Chinese parsers will inevitably cause noise and performance degradation. A unlabeled example and corresponding human annotation on ancient Chinese sentence "可怜人似月中孀(*It is* 83 pitiful like Chang'e in the moon)" are shown in Figure 1. To address this challenge, we propose a novel syntax encoding structure - confidencebased syntax encoding network (cSEN), which alleviates the negative effect of noise by measuring confidence of arcs in syntax graphs. Specifically, confidence is calculated by performing Biaffine transformation over the sequence representation and the derived syntactic graph adjacency matrix.
With this obtained confidence, our model is capable of distinguishing useful syntactic features from noise.
Moreover, compared with modern Chinese, ancient Chinese has more concise expressions and thus more compact structures, where each token is highly related to the preceding and following ones. Considering this linguistic characteristic, we incorporate another graph feature - left-right branch (LRB),
which captures local features to further improve ancient Chinese understanding. Experiments are conducted on two typical ancient Chinese understanding tasks, thematic classification of ancient poetry and ancient-modern Chinese translation. Results show that our model achieves significant improvements over powerful baselines, and our proposed cSEN can effectively handle the noise in the derived syntax trees. To the best of our knowledge, our proposed cSEN is the first solution that makes syntax practical in ancient Chinese processing. The proposed cSEN can serve as a backbone for enriching our understanding of ancient texts, offering a scalable and consistent solution for education, research, and broadening the public's access to these significant cultural treasures.
Overall, the contributions of this paper can be concluded in four folds:
- This study fills the research gap of exploring the role of syntax in ancient Chinese understanding. Our work demonstrates that syntactic information, even noisy parses from unsupervised derivation, can benefit ancient Chinese understanding substantially.
- We propose a novel architecture - confidencebased syntax encoding network (cSEN),
which alleviates the negative effect of noise in syntax parses, thus making it practical to utilize derived syntactic information to enhance ancient Chinese understanding.
- The effectiveness of cSEN is evaluated on two typical ancient Chinese understanding
tasks, ancient poetry thematic classification and ancient-modern Chinese translation. Results show that our model yields significantly better performance in noisy scenarios over powerful baselines.
- We create a new dataset for the thematic classification of ancient Chinese poetry, with 22,360 poems divided into 10 theme categories. This dataset offers a data foundation for related research and helps to eliminate the lack of available ancient Chinese annotated corpora.
## 2 Related Work 2.1 Syntax Role In Modern Chinese Understanding
As syntax is highly correlated with semantics, syntactic features, including constituent and dependency structures, have been utilized in many modern Chinese understanding tasks and have been shown to be helpful clues. Li et al. (2018) explored the effect of syntax on semantic role labeling (SRL)
and confirmed that high-quality syntactic parsing can effectively enhance syntactically-driven SRL. Xia et al. (2019) designed a syntax-aware multi-task learning framework for Chinese SRL
by extracting implicit syntactic representations as external inputs for the SRL model. Jiang et al.
(2018) incorporated syntactic features to expand identified triplets for improving Chinese entity relation extraction. Zhang et al. (2020) proposed a syntax-aware approach for solving machine reading comprehension, which incorporates explicit syntactic constraints into the attention mechanism for better linguistically motivated word representations. Sun et al. (2022) utilized syntactic features, which capture depth-level structure information, including non-consecutive words and their relations, to enhance recognition of Chinese implicit intersentence relations. Zhu et al. (2022) incorporated syntactic dependency information to determine entity boundaries for improving Chinese named entity recognition. Despite the increasing attention that syntax is receiving in modern Chinese understanding, few studies have attempted to utilize syntactic features for ancient Chinese understanding.
## 2.2 Ancient-Modern Chinese Translation
Unlike bilingual translation tasks, such as ChineseEnglish, ancient and modern Chinese are written using the same characters. Despite that, translating between ancient and modern Chinese can still be challenging for native speakers. This is due to two factors: (1) the syntactical structure and grammatical order of ancient Chinese are different from those of modern Chinese, making ancient Chinese expressions more concise yet also more confusing; (2) ancient Chinese frequently employs allusion, metaphor, and symbolic imagery to implicitly evoke sensory and emotional experiences, which increases the complexity of disambiguating the intended message.
In recent years, advancements in deep learning have led to significant progress in neural machine translation. For example, Zhang et al. (2019b) proposed an unsupervised algorithm that constructs sentence-aligned ancient-modern pairs, and an endto-end neural model with copying mechanism and local attention to translate between ancient and modern Chinese. Liu et al. (2019) applied RNNbased (Bahdanau et al., 2014) and Transformerbased (Vaswani et al., 2017) machine translation models to this task. Considering the monolingual nature of this task, Yang et al. (2021) utilized pretrained model UNILM (Dong et al., 2019) and an ancient Chinese pre-trained model Guwen-BERT
to enhance performance. Over time, the Chinese language has evolved a lot, resulting in different characteristics of ancient Chinese in different eras.
To address this, Chang et al. (2021) proposed a time-aware translation method, where the model predicts both the translation results and its particular era, and uses the predicted chronological feature as auxiliary information to bridge the linguistic gap between Chinese language in different eras.
## 2.3 Classification Of Ancient Chinese Poetry
Classification of ancient Chinese poetry provides a basis for higher-level tasks, such as sentiment or style controllable poetry generation (Yang et al., 2018; Chen et al., 2019; Shao et al., 2021). In the past, statistical features and machine learning algorithms were commonly used. For example, Hou and Frank (2015) proposed a weakly supervised sentiment classification approach, which created a sentiment lexicon based on Weighted Personalized PageRank (WPPR). Shen et al. (2019) incorporated imagery features for analyzing the sentiment of Tang Poetry. In recent years, neural classifiers have been introduced to the task and made remarkable progress in performance. For instance, Xuan et al. (2018) designed a poetry style recognition model by stacking a genetic algorithm with CNN,
and Tang et al. (2020) combined CNN with a gated GRU for solving poetry sentiment classification.
## 3 Model
In this section, we describe architecture of the proposed cSEN. We first present a basic GAT encoder, then introduce our cSEN. The overview of cSEN
is shown in Figure 2.
## 3.1 Vanilla Gat
GAT is often applied over a sentence encoder to extract graph-based representations of the input text.
Given input token sequence T = {t1, t2*,...,t*l},
l denotes the sequence length. The output of the sentence encoder is denoted as matrix H ∈ Rl×n, where each row hi ∈ Rn is the representation of token ti.
With dependency structure of the input sequence from a syntax parser, we construct a dependency graph G = (V, E), where V is the set of tokens and E is the set of arcs. In the graph encoding, we employ the form of adjacency matrix to describe the graph, in which the positions with arcs and diagonal are assigned to ones, denoted as M(dep).
Linear transformation is performed by multiplying the sentence representation H with a matrix W ∈
Rn×nfor feature extraction, where n denotes the transformed feature dimension:
Z = HW.
Then, a pair-wise attention operation is performed.
For every pair ti, tj ∈ V, it concatenates corresponding representations zi and zj , then takes the dot product with vector a ∈ R2nand applies a LeakyReLU activation function:
$$\mathcal{S}^{(\mathrm{raw})}[i,j]=\mathrm{LeakyReLU}([z_i\oplus z_j]^{\top}a),$$
where ⊕ represents the concatenation operation, and S(raw) is a score matrix with the size of (l × l)
that captures inter-node relations. To integrate the graph structure, the adjacency matrix M(dep) is used to constrain the function scope before a regular **Softmax** operation is performed. By doing this, each token can only attend to its head tokens and itself. The obtained attention weights matrix then is used for scaling the transformed sentence representation Z and calculating the final attentional output:
$$\mathcal{W}^{(\mathrm{attn})}=\mathrm{Softmax}(\mathcal{S}^{(\mathrm{raw})}\times\mathcal{M}^{(\mathrm{dep})}),$$
$${\mathcal{H}}^{\mathrm{(attn)}}={\mathcal{W}}^{\mathrm{(attn)}}\,{\mathcal{Z}}.$$
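For clarity, a minimal single-head PyTorch sketch of this graph-constrained attention is given below; the class name and masking-based realization of the adjacency constraint are illustrative, and it assumes the diagonal of the adjacency matrix is already set to one as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaGATLayer(nn.Module):
    """Single-head sketch of the vanilla GAT encoding described above."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w = nn.Linear(d_in, d_out, bias=False)      # W
        self.a = nn.Parameter(torch.empty(2 * d_out))    # a
        nn.init.normal_(self.a, std=0.02)

    def forward(self, h, adj):
        # h: (batch, l, d_in); adj: (batch, l, l) dependency adjacency matrix M^(dep)
        z = self.w(h)                                                   # Z = HW
        zi = z.unsqueeze(2).expand(-1, -1, z.size(1), -1)
        zj = z.unsqueeze(1).expand(-1, z.size(1), -1, -1)
        s_raw = F.leaky_relu(torch.matmul(torch.cat([zi, zj], dim=-1), self.a))
        # Restrict attention to arcs in the graph (and self-loops) before Softmax.
        s_raw = s_raw.masked_fill(adj == 0, float("-inf"))
        w_attn = torch.softmax(s_raw, dim=-1)
        return torch.matmul(w_attn, z)                                  # H^(attn)
```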
![3_image_0.png](3_image_0.png)
## 3.2 Confidence-Based Gat
As discussed above, GAT guides the encoding process by constraining the scope of the attention computation. Therefore, the presence of noise in the graph will inevitably impact the encoding output.
To alleviate the negative effects of noise on the model's performance, we propose a confidencebased GAT, which measures the confidence of the graph adjacency matrix, helping the model distinguish reliable syntactic information from noise.
Similar to vanilla GAT, we first model the pairwise relationships. Two separate linear transformations are performed over the sentence representation H to obtain the role-aware representations.
The outputs are denoted as H(d) and H(h) respectively, both of which have the size of (l × n):
$$\mathcal{H}^{(d)}=\mathcal{H}W^{(d)};\qquad\mathcal{H}^{(h)}=\mathcal{H}W^{(h)}.$$
Then, Biaffine attention (Dozat and Manning, 2016) is calculated on the role-aware representations for pair-wise relationship scoring:

$$\mathcal{S}^{(\mathrm{bi})} = H^{(d)} U \big(H^{(h)}\big)^{\top},$$
where U is an intermediate matrix with the size of (n × n). Confidence scores are calculated by concatenating the pair-wise relationship scores and the adjacency matrix and passing them through processing as follows,
$$\mathcal{S}^{(\mathrm{fuse})} = \mathrm{ReLU}\big(\mathrm{FFNN}^{(\mathrm{fuse})}\big([\mathcal{S}^{(\mathrm{bi})} \oplus \mathcal{M}^{(\mathrm{dep})}]\big)\big),$$
$$\mathcal{S}^{(\mathrm{conf})} = \mathrm{Sigmoid}\big(\mathrm{FFNN}^{(\mathrm{proj})}(\mathcal{S}^{(\mathrm{fuse})})\big).$$
where $\mathrm{FFNN}^{(\mathrm{fuse})}$ performs a linear transformation to fuse the two feature spaces along with a ReLU activation, and $\mathrm{FFNN}^{(\mathrm{proj})}$ is used to reduce the dimension from $2l$ to $l$, so that the Sigmoid can project the confidence features to the same magnitude as the attention scores. With the obtained confidence scores $\mathcal{S}^{(\mathrm{conf})}$, we can remedy the original attention-restraining process:
$$\mathcal{W}^{(\mathrm{conf})} = \mathrm{Softmax}\big(\mathcal{W}^{(\mathrm{attn})} + \mathcal{S}^{(\mathrm{conf})}\big),$$
$$\mathcal{H}^{(\mathrm{conf})} = \mathcal{W}^{(\mathrm{conf})} Z.$$
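A minimal PyTorch sketch of the confidence branch is given below. The hidden sizes of the two FFNNs are assumptions (the text only fixes the 2l-to-l projection), and a fixed, padded sequence length l is assumed so that the projection over the concatenated dimension is well defined.

```python
# Illustrative sketch of the confidence scorer (module and dimension choices are assumptions).
import torch
import torch.nn as nn

class ConfidenceScorer(nn.Module):
    def __init__(self, n_feat: int, seq_len: int):
        super().__init__()
        self.W_d = nn.Linear(n_feat, n_feat, bias=False)      # dependent-role projection
        self.W_h = nn.Linear(n_feat, n_feat, bias=False)      # head-role projection
        self.U = nn.Parameter(torch.empty(n_feat, n_feat))    # biaffine matrix U
        nn.init.normal_(self.U, std=0.02)
        self.ffnn_fuse = nn.Linear(2 * seq_len, 2 * seq_len)  # fuse S^(bi) with M^(dep)
        self.ffnn_proj = nn.Linear(2 * seq_len, seq_len)      # reduce 2l -> l

    def forward(self, H: torch.Tensor, M_dep: torch.Tensor) -> torch.Tensor:
        H_d, H_h = self.W_d(H), self.W_h(H)                   # (l, n) each
        S_bi = H_d @ self.U @ H_h.T                           # (l, l) biaffine scores
        fused = torch.relu(self.ffnn_fuse(torch.cat([S_bi, M_dep], dim=-1)))  # (l, 2l)
        return torch.sigmoid(self.ffnn_proj(fused))           # S^(conf), shape (l, l)

# The confidence scores are then added to the attention weights and re-normalised:
#   W_conf = softmax(W_attn + S_conf);  H_conf = W_conf @ Z
```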
In summary, cSEN alleviates the negative effect of noise in graphs through a two-fold process. First, cSEN measures the confidence of the derived syntax parses. This confidence score is then used to soft-mask noisy arcs and highlight previously undetected ones. Second, considering the linguistic characteristics of ancient Chinese, the Left-Right Branch feature is incorporated to broaden the scope of syntax graph encoding and smooth out noise and incompatibility. The combined effect of these aspects helps alleviate performance degradation caused by noise.
## 3.3 Left-Right Branch Feature
Inspired by the ubiquity of local dependencies in ancient Chinese, we introduce a novel straightforward and effective feature, left-right branch, to further improve the GAT encoding. To model local inter-token relations, we populate a matrix M(lrb) of the same size as M(dep) following
$${\mathcal{M}}^{\mathrm{(lrb)}}[i,j]={\left\{\begin{array}{l l}{1,}&{{\mathrm{if}}\ j\in\{i-1,i+1\}}\\ {0,}&{{\mathrm{otherwise.}}}\end{array}\right.}$$
This indicates that there exist arcs in the graph connecting the node and its close left and right neighbors. The left-right branch features are encoded using another GAT component, yielding a sequence representation Z(lrb) and a positional-informationintroduced attention weight matrix W(lrb). The outputs from M(dep) and M(lrb) are combined with a gated mechanism to produce the final output:
$$\begin{array}{c}{{{\mathcal{H}}^{\mathrm{(lrb)}}={\mathcal{W}}^{\mathrm{(lrb)}}{\mathcal{Z}}^{\mathrm{(lrb)}}.}}\\ {{g=\mathrm{Sigmoid}(\mathrm{FFNN}^{\mathrm{(gate)}}(\left[{\mathcal{H}}^{\mathrm{(conf)}}\oplus{\mathcal{H}}^{\mathrm{(lrb)}}\right])),}}\\ {{{\mathcal{H}}^{\mathrm{(output)}}=g\times{\mathcal{H}}^{\mathrm{(conf)}}+(1-g)\times{\mathcal{H}}^{\mathrm{(lrb)}}.}}\end{array}$$
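The left-right-branch adjacency and the gated fusion can be sketched as follows; the per-feature gate dimension and the function names are our assumptions rather than details taken from the text.

```python
# Sketch of the left-right-branch adjacency and the gated fusion of the two GAT outputs.
import torch
import torch.nn as nn

def lrb_adjacency(l: int) -> torch.Tensor:
    """M^(lrb)[i, j] = 1 iff j is the immediate left or right neighbour of i."""
    M = torch.zeros(l, l)
    idx = torch.arange(l)
    M[idx[:-1], idx[1:]] = 1.0   # right neighbour
    M[idx[1:], idx[:-1]] = 1.0   # left neighbour
    return M

class GatedFusion(nn.Module):
    def __init__(self, n_feat: int):
        super().__init__()
        # Gate is assumed to be per-feature (output dimension n_feat).
        self.ffnn_gate = nn.Linear(2 * n_feat, n_feat)

    def forward(self, H_conf: torch.Tensor, H_lrb: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.ffnn_gate(torch.cat([H_conf, H_lrb], dim=-1)))
        return g * H_conf + (1.0 - g) * H_lrb   # H^(output)
```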
## 4 Experiments
We evaluate the effectiveness of the cSEN module on two typical ancient Chinese understanding tasks: thematic classification of ancient poetry and ancient-modern Chinese translation. We build our model by incorporating the cSEN module into existing strong baselines. For the classification task, we follow the work of Vaibhav et al. (2019), which has a BERT-GAT-BiLSTM backbone architecture. For the translation task, our model is based on Jin et al. (2020), where dependency graphs are incorporated into neural sequence-to-sequence models with a pointer network.
## 4.1 Data
To address the scarcity of annotated data for thematic classification, we constructed a novel dataset1. Two graduate students specializing in Chinese literature annotated 22,360 poems, categorizing each into one of ten distinct themes under the guidance of an experienced ancient Chinese linguist. This meticulous process ensured high-quality, reliable annotations. Any conflicting labels between the two annotators were resolved through consultation with the supervisor, guaranteeing a consistent annotation standard. The dataset is then randomly divided into a training set (20,360),
a development set (800), and a test set (1,200). The distribution of themes in the dataset is detailed in Table 1.
For the ancient-modern Chinese translation, we adopt the ancient-modern Chinese parallel corpus contributed by the open source NiuTrans project2.
The corpus contains 967,255 sentence pairs extracted from ancient Chinese books. We divided
| Theme | Train | Dev | Test |
|-------|-------|-----|------|
| #Object-chanting | 1129 | 47 | 66 |
| #Landscape | 1097 | 44 | 47 |
| #Persons | 2403 | 91 | 129 |
| #History | 1087 | 40 | 76 |
| #Homesickness | 9013 | 357 | 522 |
| #Mourning | 503 | 18 | 31 |
| #War | 1746 | 62 | 115 |
| #Pastoral | 1219 | 47 | 84 |
| #Farewell | 1460 | 60 | 83 |
| #Boudoir-plaint | 703 | 34 | 47 |
| Total | 20360 | 800 | 1200 |
the corpus into training, validation, and test sets with corresponding sizes of 900,000, 60,000, and 7,255.
## 4.2 Syntax Parsing
We experiment with two settings: modern supervised parsing and ancient unsupervised syntax derivation. For modern supervised parsing, we adopt the Biaffine dependency parser (Dozat and Manning, 2016) and train it on CTB7 (Xue et al., 2010). For unsupervised syntax derivation, we follow the work of Wu et al. (2020), which utilizes linguistic knowledge gained from the pre-trained language model BERT to infer syntactic dependency structure without direct supervision. We attempt two variants of BERT for syntax derivation and the backbone sentence encoder: BERT-wwm-ext (Cui et al., 2021) and Anchi-BERT (Tian et al., 2021). BERT-wwm-ext is trained on a modern Chinese corpus containing 5.4B words, while Anchi-BERT is trained on an ancient Chinese corpus of 39.5M tokens. In addition, we treat the left-right branch as a special kind of syntax parse.
For clarity, the syntactic parses from the Biaffine parser, BERT-wwm derivation, and Anchi-BERT derivation are denoted as BiAF, WWMD, and ANCD, respectively, in the following sections.
## 4.3 Implementation And Hyper-Parameters
For the thematic classification, our model is built by stacking BERT, a graph encoder, and a single-layer LSTM. For the baseline, we do not incorporate syn-
| Methods | Parses | BERT-wwm Micro F1 | BERT-wwm Macro F1 | Anchi-BERT Micro F1 | Anchi-BERT Macro F1 |
|---------|--------|-------------------|-------------------|---------------------|---------------------|
| Baseline | None | 91.7 | 89.2 | 92.4 | 90.4 |
| GAT | LRB | 91.5 | 88.9 | 93.3 | 91.4 |
| GAT | BiAF | 92.3 | 89.7 | 93.3 | 91.2 |
| GAT | WWMD | 91.4 | 88.8 | 92.7 | 90.8 |
| GAT | ANCD | 91.8 | 89.2 | 93.2 | 91.0 |
| GAT | BiAF+LRB | 92.7 | 90.4 | 93.3 | 91.2 |
| GAT | WWMD+LRB | 91.7 | 89.6 | 93.2 | 91.2 |
| GAT | ANCD+LRB | 90.8 | 88.2 | 92.8 | 90.7 |
| GAT | BiAF+ANCD+LRB | 91.7 | 88.8 | 92.6 | 90.6 |
| cSEN | BiAF+LRB | 91.4 | 89.2 | 93.3 | 91.6 |
| cSEN | WWMD+LRB | 92.8 | 90.7 | 93.6 | 91.9 |
| cSEN | ANCD+LRB | 91.3 | 89.1 | 93.2 | 91.3 |
| cSEN | BiAF+ANCD+LRB | 91.0 | 89.1 | 93.8 | 91.9 |
| Methods | Parses | BLEU | RG-1 F-score | RG-2 F-score | RG-L F-score |
|---------|--------|------|--------------|--------------|--------------|
| Baseline | None | 37.14 | 69.71 | 46.24 | 67.62 |
| GAT | LRB | 37.42 | 69.86 | 46.36 | 67.72 |
| GAT | BiAF | 37.45 | 70.23 | 46.93 | 68.21 |
| GAT | WWMD | 37.46 | 70.20 | 46.89 | 68.14 |
| GAT | ANCD | 37.55 | 69.90 | 46.53 | 67.85 |
| GAT | BiAF+ANCD+LRB | 34.62 | 69.20 | 45.15 | 67.15 |
| cSEN | BiAF+ANCD+LRB | 37.73 | 70.27 | 47.09 | 68.23 |
tax parses, rendering the graph encoder ineffective in shaping the attention scope. The graph encoder's node embedding dimension is set to 128, and the hidden size in LSTM is set to 100. We adopt the Adam optimizer with ρ = 5e−5 and ε = 1e−8, using a batch size of 32. All classifiers are trained for 10 epochs on the train set by default.
We mostly follow the parameter settings from
(Jin et al., 2020) for the ancient-modern Chinese translation. The Adam optimizer is configured with ρ = 1e−4 and ε = 1e−8. All models are trained for 50 epochs with a batch size of 108.
## 4.4 Results

## 4.4.1 Ancient Poetry Thematic Classification
Table 2 presents the results of ancient poetry thematic classification. We report the results in Micro-F1 and Macro-F1 scores. The table is divided into three blocks, showing the results of the baseline model, vanilla GAT, and the proposed cSEN. The baseline model achieves 92.4 in Micro F1 and 90.4 in Macro F1, showing strong performance.
From the results in the first two blocks, it can be found that incorporating syntactic trees with GAT
encoder brings substantial improvement, proving the value of syntactic information for enhancing ancient Chinese understanding. Comparing the results obtained with Anchi-BERT as the sentence encoder to those obtained with BERT-wwm, we can see that Anchi-BERT outperforms BERT-wwm by a significant margin in all cases. Recall that Anchi-BERT was pre-trained on a much smaller corpus. Also, the performance of syntactic trees derived by BERT-wwm is inferior to the other three.
This once more indicates the linguistic gap and syntactic incompatibility between ancient and modern Chinese.
Unsupervised syntax trees derived by Anchi-BERT perform roughly the same as those produced by the Biaffine parser. Additionally, LRB is the best-performing syntax parse among all, improving the performance by 0.9 in Micro F1 and 1.0 in Macro F1. This can be partially explained by the fact that ancient poems are composed of a few brief sentences, which are highly concise and structurally compact. This results in fewer long-range dependencies, and each token depends closely on the immediately preceding or succeeding token.
From the third block, it can be seen that when using Anchi-BERT as sentence encoder, cSEN brings
| Variants | Micro F1 | Macro F1 |
|----------------|------------|------------|
| cSEN | 93.8 | 91.9 |
| w/o Confidence | 92.8 | 91.1 |
| w/o Gate | 93.0 | 91.0 |
Table 4: Ablation study results.
| Syntax Trees | Micro F1 | Macro F1 |
|---------------------|------------|------------|
| [ANCD]+(LRB) | 93.2 | 91.3 |
| [BiAF]+(LRB) | 93.3 | 91.6 |
| [BiAF + LRB]+(ANCD) | 92.8 | 90.9 |
| [ANCD + LRB]+(BiAF) | 92.8 | 90.5 |
| [BiAF + ANCD]+(LRB) | 93.8 | 91.9 |
performance gains across all syntax trees setups, raising the top Micro and Macro F1 scores to 93.8 and 91.9, respectively. This demonstrates that: (1)
cSEN's denoising capability is effective for utilizing noisy syntactic information to improve ancient Chinese understanding; (2) cSEN can handle noise introduced by different parses, whether it is from a supervised modern Chinese parser or unsupervised derivation.
## 4.4.2 Ancient-Modern Chinese Translation
Results of the ancient-modern Chinese translation are shown in Table 3. We use BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores for performance evaluation. The baseline model without syntax parses achieves 37.14 in BLEU score and F-scores of 69.71, 46.24, and 67.62 in ROUGE-1, ROUGE-2, and ROUGE-L respectively. With single syntactic parses incorporated, all models achieve better performance in all metrics, proving that syntax can effectively improve ancient-modern Chinese translation. LRB is relatively the weakest one, slightly increasing BLEU score by 0.28, and ROUGE F-scores by 0.15, 0.12, and 0.10.
This might be because sentences from ancient books have more long-distance dependencies and more complicated syntactic structures that the left-right branch cannot recover. Anchi-BERT derived syntax parses perform better, with an improvement of 0.41 in BLEU score, and 0.19, 0.29, and 0.23 in ROUGE scores. BERT-wwm derived syntax trees and trees generated by the Biaffine parser have similar results. In contrast to Anchi-BERT derived trees, their performance is inferior in BLEU score but better in ROUGE F-scores.
Feeding multiple syntactic parses into the GAT-based model simultaneously leads to a significant performance drop, while replacing GAT with the proposed cSEN increases performance in all metrics, with 37.73 in BLEU score and 70.27, 47.09, and 68.23 in ROUGE F-scores. From the above results, we conclude that syntax parses from unsupervised derivation or modern Chinese syntax parsers introduce noise and degrade model performance. With our confidence learning, the model is able to distinguish and separate informative syntactic information from noise, thus alleviating its negative effect.
Table 6 shows three ancient-to-modern Chinese translation examples produced by different models.
From the generations for Sent 1, we can see a common error: due to the lack of contextual information, all three models guess the surname of "the father" using the most common Chinese surnames, such as "Li" and "Zhang". For Sent 2, the generations from the baseline model and vanilla GAT differ significantly from the human-annotated reference. They fail to recognize the relationship between the characters, such as who "其娣" refers to, thus generating translations that do not correspond to the facts. In contrast, with its stronger denoising capability, cSEN is able to correctly encode the information in ancient Chinese texts, thus producing higher-quality translations.
## 5 Exploration
In this section, we investigate the impact of different cSEN components and analyze the nature of different syntax parses.
First, we conduct ablation studies on cSEN. Results are reported in Table 4. Both the removal of the confidence (equivalent to vanilla GAT) and the removal of the gated mechanism lead to significant performance degradation. This indicates that both modules are crucial for effectively utilizing syntactic information from noisy parses.
Second, we experiment with different syntax tree combination setups. Table 5 lists the findings.
Combining two syntax parsers onto a single dependency graph can provide richer syntactic information and produce higher performance than using a single syntax parse alone, no matter whether it comes from unsupervised derivation or a supervised parser. We also explore the incorporation position of LRB features.
It can be observed that the model performance suffers significantly if LRB features and graph parses Sent Ancient Chinese
src: 父岘,少骁勇,事章大宝为帐内部曲。
1 ref: 父亲陈岘,年轻时骁勇,侍奉章大宝,任帐内部曲。
trl: His father, Chen Xian, was brave when he was young. He served Zhang Dabao, as an in-tent soldier.
src: 晋伐骊戎,骊戎男女以骊姬。归生奚齐。其娣生卓子。
2 ref: 晋国攻打骊戎,骊戎男把骊姬献给晋献公,回国后生了奚齐,她的妹妹生了卓子。
trl: Jin state attacked Li Rong. Li Rong's men presented Li Ji to Jin Xian Gong. After returning the state, Li Ji gave birth to Xi Qi, and her sister gave birth to Zhuo Zi.
src: 及竣事,又曰:杨存中之罢,朕不安寝者三夕。
3 ref: 等到事毕后,又说:杨存中被罢免,我有三天不得安寝。
trl: After the matter was over, he said: "The dismissal of Yang Cunzhong, made me unable to sleep for three days."
Baseline 1 父亲张岘 ,少年时骁勇善战,在章大宝帐下当兵, 任章大宝的部曲。
His father, Zhang Xian, was brave and good at fighting when he was young. He served as a soldier under the command of Zhang Dabao, and was Zhang Dabao's soldier.
2 晋国讨伐骊戎,骊戎男女献上骊姬,回国后生了奚齐,骊戎的妹妹生了卓子。
Jin state attacked Li Rong, and Li Rong's men and women presented Li Ji, and after returning the state, they gave birth to Xi Qi, and Li Rong's sister gave birth to Zhuo Zi.
3 等到完成任务,又说:杨存中被罢免,朕三夜不安寝。
After completing the task, he said: "The dismissal of Yang Cunzhong, made me unable to sleep for three nights."
Vanilla GAT
1 父亲李岘 ,少年时骁勇善战,在章大宝手下做事。
His father, Li Xian, was brave and good at fighting when he was young, and worked under Zhang Dabao.
2 晋国讨伐骊戎,骊戎男娶了骊姬,嫁给了奚齐,奚齐的妹妹生了卓子。
Jin state attacked Li Rong. Li Rong's men married Li Ji, married Xi Qi, and Xi Qi's sister gave birth to Zhuo Zi.
3 等到完工 ,又说:杨存中被罢免的事,我感到不安宁 的有三个晚上。
After the work was finished, he said: I felt uneasy about Yang Cunzhong's dismissal for three nights.
cSEN
1 父亲陈岘,年轻时骁勇,事奉章大宝任帐内部曲。
trl: His father, Chen Xian, was brave when he was young. He served Zhang Dabao as an in-tent soldier.
2 晋国攻打骊戎,骊戎男把骊姬送给晋国,回国后生了奚齐,她的妹妹生了卓子。
Jin state attacked Li Rong. Li Rong's men presented Li Ji to Jin State. After returning the state, Li Ji gave birth to Xi Qi, and her sister gave birth to Zhuo Zi.
3 等到事情完毕,又说:杨存中被罢免,我三天不安寝。
trl: After the matter was over, he said: "The dismissal of Yang Cunzhong, made me unable to sleep for three days."
are directly merged together. This again indicates the necessity of our gated method for LRB feature integration.
Third, as illustrated in Figure 3, we compare our model and baselines over different input lengths. The results show that cSEN performs better on relatively longer sentences. This supports the hypothesis that syntax helps guide the understanding of longer sentences, as dependencies reduce the distance between related tokens. Because of the incompatibility between modern and ancient Chinese, unsupervised derivation is more effective than supervised modern-Chinese parsing. In most cases, cSEN yields better performance due to its stronger denoising capabilities.
## 6 Conclusions
In this paper, we investigate the role of syntax in improving ancient Chinese understanding. Due to the lack of syntax annotations, syntax trees are obtained by unsupervised derivation and a supervised modern Chinese parser. To alleviate the negative effect of noise, we propose a confidence-based syntax encoding network (cSEN). Experimental results on
![7_image_0.png](7_image_0.png)
two typical ancient Chinese understanding tasks show that our model can effectively distinguish informative syntactic information from noise and achieve better performance. The application of our proposed cSEN can enhance the accessibility of ancient Chinese resources by offering a scalable and consistent solution for mining semantic information of ancient Chinese texts.
## Limitations
The main limitation of our study comes from the extra parameters introduced by the confidence calculation, in which two separate self-attention operations and a Biaffine transformation are performed. The additional parameters result in a more time-consuming training process and higher hardware demands for storage. To address this issue, we plan to combine parameters from different attentional transformations into shared weight matrices in future work to reduce the model size.
## Acknowledgements
This paper was partially supported by the National Natural Science Foundation of China [No.
72074171].
## References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. *arXiv preprint* arXiv:1409.0473.
Emanuele Bugliarello and Naoaki Okazaki. 2020. Enhancing machine translation with dependency-aware self-attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1618–1627.
Ernie Chang, Yow-Ting Shiue, Hui-Syuan Yeh, and Vera Demberg. 2021. Time-aware ancient chinese text translation and inference. *arXiv preprint* arXiv:2107.03179.
Huimin Chen, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, and Zhipeng Guo. 2019. Sentimentcontrollable chinese poetry generation. In *IJCAI*,
pages 4925–4931.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
29:3504–3514.
Anna Currey and Kenneth Heafield. 2019. Incorporating source syntax into transformer-based neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 24–33.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. *Advances in Neural Information Processing Systems*, 32.
Timothy Dozat and Christopher D Manning. 2016.
Deep biaffine attention for neural dependency parsing. *arXiv preprint arXiv:1611.01734*.
Kong Fang and J Fu. 2019. Incorporating structural information for better coreference resolution. In Twenty-Eighth International Joint Conference on Artificial Intelligence IJCAI-19.
Shaoru Guo, Yong Guan, Ru Li, Xiaoli Li, and Hongye Tan. 2020. Incorporating syntax and frame semantics in neural network for machine reading comprehension. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 2635–
2641.
Yufang Hou and Anette Frank. 2015. Analyzing sentiment in classical chinese poetry. In *Proceedings of* the 9th SIGHUM workshop on language Technology for Cultural Heritage, social sciences, and humanities (LaTeCH), pages 15–24.
Fan Jiang and Trevor Cohn. 2022. Incorporating constituent syntax for coreference resolution. arXiv preprint arXiv:2202.10710.
Yishun Jiang, Gongqing Wu, Chenyang Bu, and Xuegang Hu. 2018. Chinese entity relation extraction based on syntactic features. In *2018 IEEE International Conference on Big Knowledge (ICBK)*, pages 99–105. IEEE.
Hanqi Jin, Tianming Wang, and Xiaojun Wan. 2020.
Semsum: Semantic dependency guided neural abstractive summarization. In *Proceedings of the* AAAI conference on artificial intelligence, volume 34, pages 8026–8033.
Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018.
A unified syntax-aware framework for semantic role labeling. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 2401–2411.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Dayiheng Liu, Kexin Yang, Qian Qu, and Jiancheng Lv. 2019. Ancient–modern chinese translation with a new large training dataset. *ACM Transactions on* Asian and Low-Resource Language Information Processing (TALLIP), 19(1):1–13.
Jerry Norman. 1988. *Chinese*. Cambridge University Press.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Yizhan Shao, Tong Shao, Minghao Wang, Peng Wang, and Jie Gao. 2021. A sentiment and style controllable approach for chinese poetry generation. In *Proceedings of the 30th ACM International Conference on* Information & Knowledge Management, pages 4784–
4788.
Yabo Shen, Yong Ma, Chunguo Li, Shidang Li, Mingliang Gu, Chaojin Zhang, Yun Jin, and Yingli Shen.
2019. Sentiment analysis for tang poetry based on imagery aided and classifier fusion. In *International* Conference on Artificial Intelligence for Communications and Networks, pages 283–290. Springer.
Kaili Sun, Yuan Li, Huyin Zhang, Chi Guo, Linfei Yuan, and Quan Hu. 2022. Syntax–aware graph convolutional network for the recognition of chinese implicit inter-sentence relations. *The Journal of Supercomputing*, pages 1–24.
Yongrui Tang, Xumei Wang, Peng Qi, and Yan Sun.
2020. A neural network-based sentiment analysis scheme for tang poetry. In *2020 International* Wireless Communications and Mobile Computing
(IWCMC), pages 1783–1788. IEEE.
Huishuang Tian, Kexin Yang, Dayiheng Liu, and Jiancheng Lv. 2021. Anchibert: a pre-trained model for ancient chinese language understanding and generation. In *2021 International Joint Conference on* Neural Networks (IJCNN), pages 1–8. IEEE.
Hai Long Trieu, Anh-Khoa Duong Nguyen, Nhung Nguyen, Makoto Miwa, Hiroya Takamura, and Sophia Ananiadou. 2019. Coreference resolution in full text articles with bert and syntax-based mention filtering. In Proceedings of The 5th Workshop on BioNLP Open Shared Tasks, pages 196–205.
Vaibhav Vaibhav, Raghuram Mandyam Annasamy, and Eduard Hovy. 2019. Do sentence interactions matter? leveraging sentence level representations for fake news classification. arXiv preprint arXiv:1910.12203.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Chen Weiping. 2017. An analysis of anti-traditionalism in the new culture movement. *Social Sciences in* China, 38(2):175–187.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020.
Perturbed masking: Parameter-free probing for analyzing and interpreting bert. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 4166–4176.
Qingrong Xia, Zhenghua Li, and Min Zhang. 2019.
A syntax-aware multi-task learning framework for chinese semantic role labeling. *arXiv preprint* arXiv:1911.04641.
Jing Xuan, Zhongshi He, Liangyan Li, Weidong He, Fei Guo, Hang Zhang, and Qiong Wu. 2018. Brainoriented cconvolutional neural network computer style recognition of classical chinese poetry. *NeuroQuantology*, 16(4).
Nianwen Xue, Zixin Jiang, Xiuhong Zhong, Martha Palmer, Fei Xia, Fu-Dong Chiou, and Meiyu Chang.
2010. Chinese treebank 7.0. https://catalog.
ldc.upenn.edu/LDC2010T07. Accessed:
2022-05-20.
Cheng Yang, Maosong Sun, Xiaoyuan Yi, and Wenhao Li. 2018. Stylistic chinese poetry generation via unsupervised style disentanglement. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 3960–3969.
Zinong Yang, Ke-jia Chen, and Jingqiang Chen. 2021.
Guwen-unilm: Machine translation between ancient and modern chinese based on pre-trained models. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 116–128.
Springer.
Meishan Zhang, Zhenghua Li, Guohong Fu, and Min Zhang. 2019a. Syntax-enhanced neural machine translation with syntax-aware word representations.
arXiv preprint arXiv:1905.02878.
Zhiyuan Zhang, Wei Li, and Qi Su. 2019b. Automatic translating between ancient chinese and contemporary chinese with limited aligned corpora. In CCF
International Conference on Natural Language Processing and Chinese Computing, pages 157–167.
Springer.
Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, and Rui Wang. 2020. Sg-net:
Syntax-guided machine reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9636–9643.
Peng Zhu, Dawei Cheng, Fangzhou Yang, Yifeng Luo, Dingjiang Huang, Weining Qian, and Aoying Zhou. 2022. Improving chinese named entity recognition by large-scale syntactic dependency graph.
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:979–991.
gao-emami-2023-turing | The Turing Quest: Can Transformers Make Good NPCs? | https://aclanthology.org/2023.acl-srw.17 | In this paper, we study the viability of the deployment of language models towards non-playable character (NPC) scripts, by introducing a novel pipeline for the automatic construction of NPC scripts using Transformer-based believable scripts for a variety of game genres and specifications. In addition, we propose a self-diagnosis method inspired by previous work to develop language models, tailored specifically to desirable NPC qualities such as coherency, believability, and degree of repetition. Finally, we propose a new benchmark, called The Turing Quest, which we use to show that the pipeline, when applied to GPT-3, can generate for a variety of game genres and contexts, NPC scripts that can fool judges in thinking they have been written by humans. We believe that these findings can greatly benefit both the gaming industry and its global community of users, since many current games continue to base their NPCs on manually-curated scripts that are resource-demanding and may curb the immersiveness and enjoyment of the user.

# The Turing Quest: Can Transformers Make Good NPCs?
Qi Chen Gao, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, Canada ([email protected])
Ali Emami, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, Canada ([email protected])
## Abstract
In this paper, we investigate the potential of using large pre-trained language models to generate non-playable character (NPC) scripts in video games. We introduce a novel pipeline that automatically constructs believable NPC
scripts for various game genres and specifications using Transformer-based models. Moreover, we develop a self-diagnosis method, inspired by prior research, that is tailored to essential NPC characteristics such as coherence, believability, and variety in dialogue. To evaluate our approach, we propose a new benchmark, The Turing Quest, which demonstrates that our pipeline, when applied to GPT-3, generates NPC scripts across diverse game genres and contexts that can successfully deceive judges into believing they were written by humans.
Our findings hold significant implications for the gaming industry and its global community, as the current reliance on manually-curated scripts is resource-intensive and can limit the immersiveness and enjoyment of players.
## 1 Introduction
Over the past decade, there has been a growing interest in applying deep learning models to Natural Language Generation (NLG) for open-domain dialogue systems and conversational agents. In parallel, the gaming industry has been striving to create more immersive experiences for players by enhancing their interactions with non-playable characters
(NPCs). However, the potential of utilizing state-ofthe-art deep learning models, such as Transformerbased models, to create NPC scripts remains largely unexplored.
Pre-trained Transformer-based language models (PLMs) like OpenAI's GPT-3 (Brown et al.,
2020) and ChatGPT (Schulman et al., 2022) have demonstrated impressive conversational abilities
(Milne-Ives et al., 2020). In certain contexts, the text generated by these models can be nearly indistinguishable from human-written text (M Alshater, 2022) without the aid of external tools or watermarks (Gambini et al., 2022). The use of these models in real-world applications has been expanding in areas such as customer service automation
(Xu et al., 2017) (Zou et al., 2021), educational conversational agents (Molnár and Szüts, 2018),
and mental health dialogue systems (Abd-Alrazaq et al., 2019).
Despite their growing prevalence, the effectiveness and generalization capabilities of PLMs in various contexts remain uncertain. One such uncharted domain is the creation of "non-playable characters" or NPCs in video games.
When comparing chatbots to NPCs, the latter can be considered as a narrative-driven variant of goal-oriented chatbots. However, NPCs and chatbots serve different purposes and operate in distinct environments. Generating NPC scripts presents unique challenges, as the dialogue must be consistent with the game's plot, genre, and the NPC's character to maintain player immersion and suspension of disbelief (Kerr and Szafron, 2009). According to Lee and Heeter (2015), NPC believability hinges on "the size and nature of the cognitive gap between the [NPC that] players experience and the [NPC] they expect". Players anticipate NPCs with individualized and possibly dynamic traits, which should be reflected in their dialogue.
While incorporating personality into dialogue systems is well-studied (Qian et al., 2017) (Smestad and Volden, 2019) (de Haan et al., 2018), the challenge of generating goal-oriented, believable NPC
scripts that align with a game's narrative and thematic elements, while preserving player immersion, remains substantial.
The ability to automatically generate contextually appropriate dialogue for a specified character could have an effect on the design paradigms of future video games. While manually scripted narratives and plot points will continue to hold their value, developers could augment player immersion
![1_image_0.png](1_image_0.png)
by allowing an array of NPCs to dynamically respond to a player's in-game progression. Traditionally, game design involves scripted dialogues only for NPCs that contribute directly to a quest or story line, thereby limiting the extent of player interaction. It is not often possible for a player to initiate a conversation with a companion about an ongoing quest or solicit their views, creating an impression that, from an NPC's perspective, the player's existence is confined to the quests they undertake.
Simply implementing an interactive companion system necessitates writing dialogues for every quest for all possible companions—a labor-intensive task. Expanding this system to encompass a majority of a game's NPCs would further compound these challenges, increasing the amount of labour to an unreasonable degree. The vast amount of dialogue required for each narrative stage would significantly exceed typical time and resource constraints of most developers. Despite the potential enrichment of the player experience, the practicality of creating such an immersive, dialogue-rich environment using solely human-authored dialogue in game development remains questionable.
In this study, we investigate the application of Transformer-based models like GPT-3 to the task of creating NPCs and generating believable scripts. To this end, we develop an NPC construction pipeline capable of generating dialogue based on the NPC's attributes alone. Our pipeline comprises three key modules: a) a *Feature Characterization Schema* that classifies NPCs based on personality traits and world descriptions, b) an *Automatic Prompt Creation* process that employs the schema to generate tailored prompts for conditioning language models, and c) a *Dialogue Generation* phase that uses the customized prompts to generate scripts with Transformer-based PLMs. Figure 1 provides an example of dialogue generated through this pipeline. We also devise and automate an evaluation metric for NPC dialogue quality, drawing inspiration from related literature (Brown et al.,
2020). Lastly, we propose the Turing Quest: a test using human judges to assess the believability and quality of generated NPC scripts.
## 2 Related Work
In recent years, there has been a growing interest in dialogue systems and conversational agents.
However, the exploration of dialogue generation for NPCs in video games, despite their similarities to chatbots, remains limited. Although most video games in the past decade include NPC dialogue, research on automating its creation using Artificial Intelligence (AI) is still in its infancy.
NPC Dialogue generation. In the early 2000s, efforts in NLP to create better NPC dialogue relied on hand-crafted algorithms and manually authored grammars (Schlünder and Klabunde, 2013)
(Ryan et al., 2016). Schlünder and Klabunde (2013)
succeeded in generating greetings that players perceived as more polite and appropriate than in-game greetings. However, their rule-based method relied on labor-intensive, discrete human-defined steps that were difficult to scale into full branching conversations. With recent advancements in goaloriented chatbots utilizing machine learning techniques such as reinforcement learning (Liu et al.,
2020) and dialogue generation through deep reinforcement learning (Li et al., 2016) (Li, 2020),
automating NPC dialogue generation becomes increasingly feasible.
The introduction of AI into games has led to the application of various AI techniques and algorithms to enhance gameplay experiences through improved bots (Nareyek, 2004) and adaptive experiences (Raifer et al., 2022). There has been significant research into using machine learning to create bots that provide challenging and entertaining opponents for players (Håkansson and Fröberg, 2021). However, this trend of applying machine learning to different game design tasks does not extend to dialogue generation for NPCs.
Although pre-trained language models such as GPT-3 continue to expand their applicability, generalization remains an unsolved problem. While PLMs like GPT-3 have shown natural language generation capabilities (Topal et al., 2021), research into NLG with Transformer-based models trained on NPC dialogue has revealed that the generated dialogue "compared rather poorly to human-written
[dialogue]" in terms of purpose and coherence
(Kalbiyev, 2022). Nevertheless, generalization difficulty for LMs is not unique to NPC dialogue (Ye et al., 2021). We hypothesize that NPC dialogue is not merely another generalization problem but a distinct task. This hypothesis is supported by the inadequacy of chatbot evaluation metrics (Peras, 2018) when applied to NPC dialogue.
NPC Dialogue Metrics. Metrics proposed for chatbots do not directly translate to suitable metrics for NPC dialogue. While chatbot success is often determined by how "human" they sound and their ability to maintain a conversation with a human
(Turing, 1950), NPC dialogue is always directed and goal-oriented. Generating dialogue for NPCs presents unique challenges compared to text generation in fictional settings. The generated dialogue must be consistent with the game world and the NPC's specific traits and personality, and it should ensure coherence and contextual relevance in relation to the player's input. No test equivalent to the Turing test or its alternatives, such as the Winograd schema (WSC) (Winograd, 1972; Levesque et al., 2011) exists specifically for NPC dialogue.
To our knowledge, there is no standard metric to evaluate the quality of generated NPC dialogue.
One suggested metric for NPC dialogue is "coherence, relevance, human-likeness, and fittingness"
(Kalbiyev, 2022). While coherence, relevance, and human-likeness can be applied to chatbots, fittingness—defined by Kalbiyev (2022) as how well the response fits the game world—is unique to NPCs.
## 3 Npc Construction Pipeline
The objective of the NPC construction pipeline is to automatically generate coherent, contextually appropriate, and engaging utterances for an NPC, given the dialogue history between the NPC
and a player, as well as the contextual information about the NPC and the game. The pipeline consists of three modules, which serve to a) characterize the NPC according to a generalized representation schema that captures crucial information about the NPC's role, personality, and game context, b) generate short prompts based on the characterization, providing contextually relevant pretexts for the language model (LM), and c) generate utterances based on these prompts using an LM optimized for NPC dialogue generation.
## 3.1 Module 1: Feature Characterization Schema
The first module in the pipeline involves developing a schema that characterizes a given NPC according to a number of game- and NPC-relevant features. Identifying the most concise set of features needed to define any NPC is a challenging task, as NPCs not only exhibit vastly different personalities but can also serve different purposes for the player and the game world. For example, in the action role-playing game, "The Elder Scrolls V: Skyrim" (Bethesda Game Studios, 2011), the NPC *Balgruuf the Greater* is a Jarl, i.e., a king or ruler who assigns quests to the player to maintain peace. In contrast, a character like *KL-E-0* from
"Fallout 4" (Bethesda Game Studios, 2015), a robot arms dealer in a post-nuclear apocalyptic world, has little concern for peace. Based on (Warpefelt, 2016), NPCs should possess both a ludic function and a narrative framing for their actions to be coherent and believable. That is, an NPC should fulfill a gameplay or mechanical purpose—i.e., a ludic function—while advancing the narrative through their actions.
To develop a characterization of NPCs that captures their differences across various games and genres, we should consider several important features, such as their relationship and role with respect to the player (e.g., buying and selling, providing quests, etc.) and their individual personality and values. Taking into account narrative purpose, ludic purposes, and the personality and characteristic differences of NPCs, we propose five game-specific features to characterize and distinguish NPCs:
| Feature | Narrative | Ludic function |
|-----------------|-----------|----------------|
| World Desc. | ✓ | |
| NPC Role | ✓ | |
| NPC Personality | ✓ | |
| Game State | ✓ | ✓ |
| NPC Objective | ✓ | ✓ |
Each of these five features either fulfills a ludic function or contributes to the game's narrative, and in some cases, a feature serves both purposes. This schema enables us to classify NPCs based on their in-game mechanics (Hunicke et al., 2004) while also capturing their role in the game's story. By incorporating these features into the NPC construction pipeline, we can create NPCs that not only adhere to the context and constraints of the game world but also exhibit distinct and engaging personalities, which can significantly enhance players' immersion and overall gaming experience.
World Description. A world description provides a summary of the story thus far, including information about the game world and its unique characteristics. Without this information, actions, thoughts, and utterances may be incoherent or unfitting, as they lack awareness of the setting and genre.
This may result in dialogue or actions that conflict with the player's expectations. For instance, if Balgruuf from the previous example, originating from a fantasy adventure game, were placed in a sci-fi horror set in space, his actions, appearance, and dialogue would clash with the rest of the game. NPCs become "essentially incomprehensible if they are not framed according to the narrative" (Warpefelt, 2016). Ignoring information related to the setting, genre, and themes present in the NPC's world may affect the believability and fittingness of the NPC. More importantly, the narrative dissonance generated could shatter the willful suspension of disbelief—coined by Samuel Taylor Coleridge (1971)—and break the player's immersion in the game's world and story.
Role. Each unique NPC is created to fulfill a purpose. Continuing from the previous example, Balgruuf primarily functions as a *questgiver*—facilitating the player's progression through the main quest line and occasionally offering side quests to enrich the narrative experience. Omitting his role would fail to represent a critical function of his character. Defining the role of an NPC, whether as a vendor, quest giver, or storyteller, etc., is thus crucial. We selected these roles based on the typology of NPCs and the NPC model proposed in
(Warpefelt, 2016). We adapted the types of NPCs from (Warpefelt, 2016) and simplified the set of NPC types to those that would feasibly have a conversation with the player while also merging entries that were similar in their roles. This resulted in eight types of NPCs, six neutral or friendly roles, and two non-friendly roles, as shown below, in Table 2.
| Metatype | Role |
|-------------|------------------|
| Functional | Vendor |
| Functional | Service Provider |
| Functional | Questgiver |
| Providers | Story teller |
| Friendly | Ally |
| Friendly | Companion |
| Adversaries | Enemy |
| Adversaries | Villain |
Personality. To describe any given NPC, it is necessary to elaborate on their personality and unique characteristics that distinguish them from other characters. These characteristics include physical attributes and appearances, psychological and personality traits such as the strength of the *OCEAN*
personality traits proposed in (Digman, 1990), likes and dislikes, etc. This feature focuses on the details of the NPC's character, such as their occupation, beliefs, and other related details. NPCs are characters at their core, making it essential to incorporate these details into their depiction.
Game State. This describes the progression of the game and changes to the NPC's location. The NPC's dialogue may change based on the objectives completed by the player and the current state of the in-game world. The addition of this feature allows us to focus on the NPC during any single time frame during the course of the game. This enables better classification of dynamic NPCs that change over the course of the game and react to the player's actions. This feature also allows specifying details such as the current location of the NPCs and the scope of information the NPC possesses.
Game state serves both a narrative and ludic purpose; for example, a shopkeeper may offer more goods depending on the player's actions, and the NPC's location also aids in framing their actions and dialogue, as a vendor may only offer certain goods in specific towns.
Objective. The NPC Objective is the purpose of the NPC apart from the player. According to Dennett Daniel (1981), *personhood* consists of six different themes: Rationality, Intentionality, Stance, Reciprocity, Communication, and Consciousness.
Providing an NPC with a *role* satisfies intentionality, as each action should be motivated by what the NPC was designed to achieve. However, giving them goals and aspirations allows the NPC to have a *stance* and perhaps even *consciousness* (Kalbiyev, 2022). If a blacksmith's objective is to raise enough money for their family, they should act and speak accordingly. Their actions and dialogue should not solely reflect their personality but also their objective. This feature allows the schema to capture complex and dynamic NPCs with intricate values and goals not fully represented by their role or *personality*. The addition of this feature enables the NPC to have a greater purpose than merely serving as an outlet for exposition or facilitating a game function.
With these features, we propose that each unique NPC can be encapsulated and represented wholly, as shown in Figure 2. Each of these features is independent of the others, allowing for modularity when designing NPCs. However, clashing combinations may still exist regardless of the modular nature of this schema.
| World | A fantasy world of Dragons and magic; Skyrim |
|-------------|--------------------------------------------------------------------------------------------------|
| Role | Questgiver |
| Personality | Nord, Jarl of Whiterun, Loyal, Noble, Blonde, reasonable |
| State | Sitting on throne in dragonsreach. Contemplating the war and recent reports of dragons |
| Goal | The safety and prosperity of the people of whiterun and a solution to the looming dragon threat. |
## 3.2 Module 2: Prompt Creation
Prompt creation was designed with the feature representation schema in mind. Providing the LM
with sufficient information about an NPC is crucial to ensure that the generated dialogue remains consistent with the character's identity. These requirements are akin to the challenges faced by the feature representation schema. Consequently, the prompt creation module integrates the various features present in the schema and uses them as a prompt. The first line of each prompt begins with the sentence "You are an NPC in a game", followed by optional details such as a name, some details about the world that the NPC inhabits, the role of the NPC, basic personal characteristics, their current state (e.g., sitting outside thinking about their daughter), and finally their goal(s). Most of these categories are optional, except for the NPC type
(i.e., their *role*), which must always be present. By incorporating these features, the prompt creation module empowers users to guide the LM in generating diverse NPCs with individualized personalities, allowing for greater customization without the need for prior fine-tuning or training.
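A minimal sketch of how the five-feature schema could be turned into such a prompt is shown below. The dataclass fields mirror the schema of Section 3.1, and the exact wording of the generated lines (apart from the opening sentence quoted above) is illustrative rather than the authors' implementation.

```python
# Sketch of the feature characterization schema and the NPC-header prompt it produces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NPCSchema:
    role: str                      # the only mandatory feature (vendor, questgiver, ...)
    name: Optional[str] = None
    world: Optional[str] = None    # world description / genre
    personality: Optional[str] = None
    state: Optional[str] = None    # current game state and location
    goal: Optional[str] = None     # the NPC's own objective

def build_npc_header(npc: NPCSchema) -> str:
    lines = ["You are an NPC in a game."]
    if npc.name:
        lines.append(f"Your name is {npc.name}.")
    if npc.world:
        lines.append(f"The world you inhabit: {npc.world}")
    lines.append(f"Your role: {npc.role}")
    if npc.personality:
        lines.append(f"Your characteristics: {npc.personality}")
    if npc.state:
        lines.append(f"Your current state: {npc.state}")
    if npc.goal:
        lines.append(f"Your goal: {npc.goal}")
    return "\n".join(lines)

# Example instance based on the NPC from Figure 2.
balgruuf = NPCSchema(
    role="Questgiver",
    name="Balgruuf the Greater",
    world="A fantasy world of dragons and magic; Skyrim",
    personality="Nord, Jarl of Whiterun, loyal, noble, reasonable",
    state="Sitting on his throne in Dragonsreach, contemplating the war and reports of dragons",
    goal="The safety and prosperity of the people of Whiterun and a solution to the dragon threat",
)
print(build_npc_header(balgruuf))
```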
NPC Header. Utilizing this prompt creation method, we created the NPC header; a representative example is depicted in Figure 3. This header plays a pivotal role in dialogue generation by providing essential information about the character.
For our needs, we also created a player header using the same information used in the NPC header, guiding the LM to mimic a player's behavior and facilitate automated dialogue generation. The generated player dialogue is less creative and more prone to repetition compared to human-written dia-
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
logue. This issue is beyond the scope of this paper, as our focus lies on NPC dialogue generation.
## 3.3 Module 3: Dialogue Generation
Dialogue generation was executed automatically and iteratively. The prompt was structured as a combination of the header and the current dialogue history. The header section is continually swapped depending on which agent's dialogue—NPC or player—is currently being generated. By placing the header at the top of the prompt and swapping it for the active agent, PLMs can generate dialogue that is coherent with the current speaker and their traits.
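The header-swapping loop can be sketched as follows; the `complete` argument stands in for whatever completion API backs the pipeline (e.g., a GPT-3 call), and the turn ordering and prompt formatting are illustrative assumptions.

```python
# Sketch of the iterative generation loop with header swapping.
from typing import Callable, List

def generate_dialogue(npc_header: str,
                      player_header: str,
                      first_sentence: str,
                      complete: Callable[[str], str],
                      n_turns: int = 6) -> List[str]:
    # The hand-written opening line is attributed to the player here (an assumption).
    history = [f"Player: {first_sentence}"]
    for turn in range(n_turns):
        npc_turn = turn % 2 == 0                 # alternate NPC / player turns
        header = npc_header if npc_turn else player_header
        speaker = "NPC" if npc_turn else "Player"
        # Prompt = header of the active agent + full dialogue history so far.
        prompt = header + "\n\n" + "\n".join(history) + f"\n{speaker}:"
        utterance = complete(prompt).strip()
        history.append(f"{speaker}: {utterance}")
    return history
```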
First Sentences. In early development-stage results, GPT-3 demonstrated difficulty in generating effective first sentences. Combined with the inherent challenge of generating human-like responses, this led to a significant drop in the overall quality of dialogue—often resulting in both NPC and player generating blank lines or constantly repeating the same responses. A workaround was developed by employing a small set of hand-written first sentences based on the genre and NPC type. This workaround allowed the conversation to avoid immediate repetition while minimizing interference with dialogue generation.
Repetition. In our preliminary testing, we found that PLMs struggle to avoid repetition when the player dialogue is similar to a past query or sentence. This often caused the NPC's response to be similar or even identical to its previous response.
To circumvent this issue, we implemented a dynamic frequency penalty. The dynamic frequency penalty incrementally increases when the NPC or player generates a response that already exists in the conversation. After detecting a repetition and incrementing the frequency penalty, the LM attempts to regenerate with the same prompt, excluding the repeated sentence. This process occurs up to three times or until a new sentence is generated before resetting the frequency penalty to the original value before any increments. This technique significantly reduced overall repetitions and drastically decreased the occurrence of loops appearing early in the conversation.
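A sketch of this retry loop is given below; the size of the penalty increment and the `complete` placeholder for the underlying completion call are assumptions, while the three-retry limit and the reset behaviour follow the description above.

```python
# Sketch of the dynamic frequency penalty: on a detected repetition the penalty is raised
# and generation is retried, up to three times, before the penalty resets.
from typing import Callable, List

def generate_non_repeating(prompt: str,
                           history: List[str],
                           complete: Callable[[str, float], str],
                           base_penalty: float = 0.0,
                           step: float = 0.3,       # increment size is an assumption
                           max_retries: int = 3) -> str:
    penalty = base_penalty
    response = complete(prompt, penalty).strip()
    retries = 0
    while response in history and retries < max_retries:
        penalty += step                            # raise the frequency penalty and retry
        response = complete(prompt, penalty).strip()
        retries += 1
    # The penalty implicitly resets: the next call starts again from base_penalty.
    return response
```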
## 4 Evaluation
To assess the performance of the NPC construction pipeline and the resulting generated dialogue, we designed a comprehensive evaluation metric that examines dialogue quality based on coherency, believability, degree of repetition, alignment of the NPC's dialogue with their role, and fittingness of the NPC's dialogue within their world. These categories draw from and adapt Kalbiyev (2022)'s metric for evaluating video game dialogue. Each metric is assigned a score between one and five, with the sum of these scores indicating the overall quality of the dialogue.
Self-diagnosis harnesses the capacity of Transformer-based language models to detect patterns within text and their few-shot learning performance to enable rapid, automated evaluation of dialogue without prior fine-tuning. We conducted a human evaluation of 66 different NPC
scripts to assess the accuracy and reliability of our self-diagnosis approach. After each conversation was evaluated and scored, we found a correlation between parameters and their average score. By including our full NPC header, we were able to generate dialogue of higher quality. We then conducted a single-blind test where human judges were asked to determine whether an NPC script was generated by AI or written manually by a human.
## 4.1 Self-Diagnosis
We investigated the ability of pretrained language models, such as GPT-3, to understand, evaluate, and diagnose dialogue when given a specific nontrivial query (e.g., "whether an NPC behaved coherently"). Schick et al. (2021) demonstrate that PLMs can identify socially undesirable attributes in text, such as racism and violence. We propose that this self-diagnosis capability is not only applicable to socially undesirable attributes but also enables PLMs to self-diagnose a broader and more general set of attributes, themes, and behaviors without further fine-tuning. For simple questions, such as whether a genre was clearly distinguishable in text, PLMs perform accurately in a zero-shot environment without examples and further guidance. This behavior is supported by Sanh et al. (2022). However, this performance does not hold when dealing with more complicated and potentially subjective questions.

![6_image_0.png](6_image_0.png)

Figure 4: Prompt structure of self-diagnosis.
Our self-diagnosis approach consists of providing examples of dialogue with different scores for each metric that needed further clarification. By "scoring dialogue", we mean, for example, giving the LM
a prompt like "What a perfect score looks like" or
"What a 3 should look like". In preliminary tests, we found that simply inputting a script and posing a question led to relatively reliable results; however, the output occasionally did not align with human responses or logic. By formulating the question more precisely and asking for a numeric response rather than a free-form sentence response, we were able to obtain a numeric answer more accurately.
To account for potential variability in the responses, we set the temperature to 0 for each test, yielding a deterministic model devoid of stochastic behavior.
We leveraged the PLM's few-shot learning abilities by adding three examples of different scoring sample dialogue before the prompt. This approach aligns scores obtained through self-diagnosis more closely with human scores on queries that a PLM
would otherwise have difficulties with.
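The following sketch shows how such a few-shot self-diagnosis query could be assembled and parsed; the prompt wording and the `complete` placeholder (assumed to be called with temperature 0) are illustrative assumptions.

```python
# Sketch of a few-shot self-diagnosis query that asks for a numeric score.
from typing import Callable, List, Tuple

def self_diagnose(script: str,
                  metric_question: str,             # e.g. "How coherent is the NPC's dialogue?"
                  few_shot: List[Tuple[str, int]],  # (example dialogue, score) pairs
                  complete: Callable[[str], str]) -> int:
    parts = []
    for example, score in few_shot:
        parts.append(f"Dialogue:\n{example}\n{metric_question}\nScore (1-5): {score}\n")
    parts.append(f"Dialogue:\n{script}\n{metric_question}\nScore (1-5):")
    answer = complete("\n".join(parts))             # LM call, assumed deterministic (temperature 0)
    digits = [c for c in answer if c.isdigit()]
    return int(digits[0]) if digits else 0          # fall back to 0 if no number is returned
```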
## 4.2 The Turing Quest
To evaluate the performance of our NPC Construction pipeline and the degree to which the resulting generated dialogue appears human-written, we propose a test tailored to NPC dialogue—the Turing Quest. Inspired by the Turing test (Turing, 1950),
the goal of this test is to determine whether a generated NPC script can be distinguished from human-written dialogue by human judges. A script passes the Turing Quest if the judge deems it human-written, and fails if perceived as AI-generated.
Conducting this test on multiple NPC script samples helps assess the proficiency of state-of-the-art PLMs in generating convincing NPC dialogue.
The Turing Quest is a self-administered questionnaire. For each script, it asks the judge to determine if the NPC's dialogue is written by a human or an AI. Since the scope of this test is to determine the believability of an NPC's dialogue, the player's dialogue can be manually written by a human.
For our test, six NPC scripts were evaluated by 12 individual judges. Four of the six scripts were generated by GPT-3, one was manually written, and the final script was sampled from the game *Skyrim*.
Our test group comprised twelve people familiar with video games and NPCs. From the responses of our judges, we determined the average passing rate was 64.58% for all AI-generated scripts. The best performing generated script had a pass rate of 75%. Interestingly, 75% of judges believed that the dialogue sampled from Skyrim was AI-generated and 50% thought the same for the manually written script. This could highlight the expectations of players regarding the current state and abilities of LMs and conversational agents. These findings provide strong empirical evidence that our pipeline, when applied to PLMs, is capable of producing NPC scripts that resemble and perhaps even surpass human-written NPC dialogue.
## 5 Experiments And Results 5.1 Parameter Search And Model Selection
We conducted a comprehensive random grid parameter search to identify the optimal model and parameters for generating high-quality NPC dialogue. Three key parameters influenced the quality and score of the generated dialogue: the language model, temperature setting, and the integration of our NPC construction pipeline prompt.
Utilizing different versions of GPT-3 (OpenAI's text-davinci-002, text-curie-001, and text-babbage-001 models) and a range of temperatures (0 to 1, incremented by 0.1), we compared the quality of dialogue generated with our full prompt and a minimal version without the world description, NPC
Personality, game state, and NPC objective sections. We repeated the experiment with another NPC role to ensure generalizability1.
1The code to reproduce all of our experimental results are available at https://github.com/FieryAced/-NPC-DialogueGeneration.
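A sketch of this sweep is shown below. It enumerates the full model-temperature-prompt grid for clarity (the paper describes a random grid search), and `generate_and_score` is a placeholder for running the pipeline and summing the self-diagnosis scores.

```python
# Sketch of the parameter sweep over model, temperature, and prompt type.
from itertools import product
from typing import Callable, Dict, Tuple

def parameter_search(generate_and_score: Callable[[str, float, bool], float]
                     ) -> Dict[Tuple[str, float, bool], float]:
    models = ["text-davinci-002", "text-curie-001", "text-babbage-001"]
    temperatures = [round(0.1 * t, 1) for t in range(0, 11)]   # 0.0, 0.1, ..., 1.0
    prompt_types = [True, False]                               # full vs. minimal NPC header
    results = {}
    for model, temp, full_prompt in product(models, temperatures, prompt_types):
        results[(model, temp, full_prompt)] = generate_and_score(model, temp, full_prompt)
    return results
```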
![7_image_0.png](7_image_0.png)
Our analysis revealed a significant decline in quality from the text-davinci-002 to text-curie-001 models, and an even more pronounced decrease between text-curie-001 and text-babbage-001. This is consistent with recent research which has shown that larger and more complex models, such as GPT3's text-davinci-002 model, have the ability to learn and generalize more complex patterns from larger and more diverse datasets, resulting in better performance across a wide range of natural language processing tasks (Brown et al., 2020).
Furthermore, the recently proposed InstructGPT
framework by Ouyang et al. (2022) allows for targeted fine-tuning of pre-trained language models to better suit the task at hand. This approach involves providing additional instructions during fine-tuning, such as task-specific prompts or data augmentation techniques, which results in improved performance for downstream tasks. With the success of InstructGPT, it is becoming increasingly clear that language models can be further optimized for specific use-cases by adjusting their architecture or fine-tuning process. Thus, it is reasonable to assume that newer and more advanced models, such as text-davinci-003, should generally perform better than their predecessors. Finally, our analysis shows that full-prompt models outperformed minimal-prompt ones, with an average 4.06-point higher score, demonstrating the effectiveness of our prompting method.
A Pearson correlation test (excluding the atypical data point with a temperature of 0) showed a positive correlation between temperature and score, r(8) = .7055, p = .022646. Higher temperature values yielded better results, with the highest average scores at temperatures of 0.9 and 0.8.
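For reference, this statistic can be reproduced with a few lines of SciPy once the per-temperature average scores are tabulated; the score values below are placeholders, not our results.

```python
from scipy.stats import pearsonr

temperatures = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # temperature 0 excluded
avg_scores = [6.1, 6.4, 6.3, 6.8, 7.0, 7.2, 7.5, 7.9, 8.1, 7.8]    # placeholder averages

r, p = pearsonr(temperatures, avg_scores)
# Degrees of freedom for the test are n - 2 = 8 with ten temperature settings.
print(f"r({len(temperatures) - 2}) = {r:.4f}, p = {p:.6f}")
```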
Based on these findings, we recommend using advanced Transformer-based LMs like OpenAI's GPT-3 "text-davinci-002" at a temperature around 0.9, along with our NPC construction pipeline, for optimal NPC script generation.
## 5.2 Results
Self-Diagnosis: To assess the reliability of the self-diagnosis module, we manually evaluated 66 NPC scripts using the same metrics applied in self-diagnosis. A Pearson correlation test showed a strong positive correlation between self-diagnosed and human-evaluated scores, r(64) = .8092, p <
.00001. This demonstrates the module's consistency and correlation with human evaluation scores.
Turing Quest Results: Our NPC construction pipeline, when using the recommended parameters, generates dialogue that not only passes as human-written but also scores highly on the evaluation metric. On average, our generated dialogue was thought to be human-written 64.58% of the time, with the best-performing script passing as human-written 75% of the time. The generated NPC scripts exhibit goal-oriented behavior and adherence to the in-game world and genre, maintaining player immersion. The Turing Quest results further confirm the high quality of the generated dialogue.
## 6 Conclusion
We developed a novel pipeline capable of automatically generating NPC scripts of comparable or superior quality to human-written NPC dialogue using Transformer-based PLMs. We then created a self-diagnosis module which provides a method to evaluate and compare the quality of NPC dialogue quantitatively. Finally, our proposal of the Turing Quest allows us to determine the capabilities of a language model when applied to the task of NPC dialogue generation and whether a script passes as human-written. While the NPC construction pipeline allows for modularity even in between responses, that aspect was not explored in depth in this paper. We will explore dialogue generation for dynamic NPCs with evolving roles or attributes in future research.
## Limitations
The dialogue generated for the player exhibits a higher degree of repetition and has a tendency towards looping. This limitation exists because we did not focus on generating player dialogue, which is a different problem of its own. To account for this limitation, both the self-diagnosis and the Turing Quest only evaluate the NPC's dialogue.
Currently, the maximum context window for the dialogue history portion is limited by the max tokens of a given model minus the tokens required for the NPC header. Despite being a rare occurrence, it is possible that the dialogue history becomes so long that the model may not be able to generate any responses as there is no more remaining space.
We did not experience this problem; however, a workaround would be to discard the oldest dialogue history entry as needed. This approach, however, may cause the NPC to lose out on information that it would otherwise be able to leverage in dialogue.
## Ethics Statement
The presence of bias within NPC models and systems poses a significant risk, particularly as the demographic of young individuals, still in the age of development, who enjoy playing video games continues to expand. In 2006, 92% of children aged 2-17 had played video games (Doğan, 2006). 97% of players under the age of 18 play more than an hour of games daily (Granic et al., 2014). According to recent statistics, the global demographic of active video game players is projected to increase over 5% year-over-year (Doğan, 2006), reaching over 3 billion active players worldwide in 2023. This means that, in the future, video games will reach more young children and adolescents. If the presence of bias is not addressed, it could subconsciously normalize problematic behaviours seen in games in children, as humans are a product of both nature and nurture (Plomin and Asbury, 2005). This in turn may lead to more biases being overlooked or ignored by the next generation of researchers, creating a vicious cycle.
## Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council of Canada and by the New Frontiers in Research Fund.
## References
Alaa A Abd-Alrazaq, Mohannad Alajlani, Ali Abdallah Alalwan, Bridgette M Bewick, Peter Gardner, and Mowafa Househ. 2019. An overview of the features of chatbots in mental health: A scoping review. *International Journal of Medical Informatics*,
132:103978.
Bethesda Game Studios. 2011. The Elder Scrolls V: Skyrim.
Bethesda Game Studios. 2015. Fallout 4.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Hayco de Haan, Joop Snijder, Christof van Nimwegen, and Robbert Jan Beun. 2018. Chatbot personality and customer satisfaction. *Info Support Research*.
Daniel C. Dennett. 1981. Conditions of personhood. *The Identities of Persons*, 175.
John M Digman. 1990. Personality structure: Emergence of the five-factor model. *Annual review of* psychology, 41(1):417–440.
Filiz Öztütüncü Doğan. 2006. Video games and children: violence in video games. In *New/Yeni Symposium Journal*, volume 44, pages 161–164.
Margherita Gambini, Tiziano Fagni, Fabrizio Falchi, and Maurizio Tesconi. 2022. On pushing deepfake tweet detection capabilities to the limits. In *14th* ACM Web Science Conference 2022, WebSci '22, page 154–163, New York, NY, USA. Association for Computing Machinery.
Isabela Granic, Adam Lobel, and Rutger CME Engels.
2014. The benefits of playing video games. *American psychologist*, 69(1):66.
Carl Håkansson and Johan Fröberg. 2021. Application of machine learning to construct advanced npc behaviors in unity 3d.
Robin Hunicke, Marc Leblanc, and Robert Zubek. 2004.
Mda: A formal approach to game design and game research. *AAAI Workshop - Technical Report*, 1.
A Kalbiyev. 2022. Affective dialogue generation for video games. Master's thesis, University of Twente.
Christopher Kerr and Duane Szafron. 2009. Supporting dialogue generation for story-based games. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 5, pages 154–160.
Michael Sangyeob Lee and Carrie Heeter. 2015. Cognitive intervention and reconciliation: Npc believability in single-player rpgs. *International Journal of RolePlaying*, 5:47–65.
Hector Levesque, Ernest Davis, and Leora Morgenstern.
2011. The winograd schema challenge. In AAAI
Spring Symposium: Logical Formalizations of Commonsense Reasoning.
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep reinforcement learning for dialogue generation. *arXiv preprint* arXiv:1606.01541.
Piji Li. 2020. An empirical investigation of pre-trained transformer language models for open-domain dialogue generation. *CoRR*, abs/2003.04195.
Jianfeng Liu, Feiyang Pan, and Ling Luo. 2020. Gochat:
Goal-oriented chatbots with hierarchical reinforcement learning. In *Proceedings of the 43rd International ACM SIGIR Conference on Research and* Development in Information Retrieval, SIGIR '20, page 1793–1796, New York, NY, USA. Association for Computing Machinery.
Muneer M Alshater. 2022. Exploring the role of artificial intelligence in enhancing academic performance:
A case study of chatgpt. *Available at SSRN*.
Madison Milne-Ives, Caroline de Cock, Ernest Lim, Melissa Harper Shehadeh, Nick de Pennington, Guy Mole, Eduardo Normando, and Edward Meinert.
2020. The effectiveness of artificial intelligence conversational agents in health care: Systematic review.
J Med Internet Res, 22(10):e20346.
György Molnár and Zoltán Szüts. 2018. The role of chatbots in formal education. In *2018 IEEE 16th* International Symposium on Intelligent Systems and Informatics (SISY), pages 000197–000202. IEEE.
Alexander Nareyek. 2004. Ai in computer games:
Smarter games are making for a better user experience. what does the future hold? *Queue*, 1(10):58–
65.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *Advances in Neural* Information Processing Systems, 35:27730–27744.
Dijana Peras. 2018. Chatbot evaluation metrics. *Economic and Social Development: Book of Proceedings*,
pages 89–97.
Robert Plomin and Kathryn Asbury. 2005. Nature and nurture: Genetic and environmental influences on behavior. The Annals of the American Academy of Political and Social Science, 600(1):86–98.
Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2017. Assigning personality/identity to a chatting machine for coherent conversation generation. *CoRR*, abs/1706.02861.
Maya Raifer, Guy Rotman, Reut Apel, Moshe Tennenholtz, and Roi Reichart. 2022. Designing an automatic agent for repeated language–based persuasion games. *Transactions of the Association for Computational Linguistics*, 10:307–324.
James Ryan, Michael Mateas, and Noah Wardrip-Fruin.
2016. Characters who speak their minds: Dialogue generation in talk of the town. In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference.
Samuel Taylor Coleridge. 1971. Biographia Literaria, 1817.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization.
Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021.
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Transactions of the Association for Computational Linguistics, 9:1408–1424.
Björn Schlünder and Ralf Klabunde. 2013. Greetings generation in video role playing games. In *Proceedings of the 14th European Workshop on Natural Language Generation*, pages 167–171, Sofia, Bulgaria.
Association for Computational Linguistics.
J Schulman, B Zoph, C Kim, J Hilton, J Menick, J Weng, JFC Uribe, L Fedus, L Metz, M Pokorny, et al. 2022.
Chatgpt: Optimizing language models for dialogue.
Tuva Lunde Smestad and Frode Volden. 2019. Chatbot personalities matters. In International conference on internet science, pages 170–181. Springer.
M. Onat Topal, Anil Bas, and Imke van Heerden. 2021.
Exploring transformers in natural language generation: Gpt, bert, and xlnet. *CoRR*, abs/2102.08036.
A. M. Turing. 1950. I.—COMPUTING MACHINERY
AND INTELLIGENCE. *Mind*, LIX(236):433–460.
Henrik Warpefelt. 2016. *The Non-Player Character:*
Exploring the believability of NPC presentation and behavior. Ph.D. thesis, Stockholm University.
Terry Winograd. 1972. Understanding natural language.
Cognitive Psychology, 3(1):1–191.
Anbang Xu, Zhe Liu, Yufan Guo, Vibha Sinha, and Rama Akkiraju. 2017. A new chatbot for customer service on social media. In Proceedings of the 2017 CHI conference on human factors in computing systems, pages 3506–3510.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021.
Crossfit: A few-shot learning challenge for crosstask generalization in nlp.
Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, and Xiaozhong Liu.
2021. Topic-oriented spoken dialogue summarization for customer service with saliency-aware topic modeling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14665–
14673. |
zheng-etal-2023-making | Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section | https://aclanthology.org/2023.acl-srw.18 | Recent advances in large language models have led to renewed interest in natural language processing in healthcare using the free text of clinical notes. One distinguishing characteristic of clinical notes is their long time span over multiple long documents. The unique structure of clinical notes creates a new design choice: when the context length for a language model predictor is limited, which part of clinical notes should we choose as the input? Existing studies either choose the inputs with domain knowledge or simply truncate them. We propose a framework to analyze the sections with high predictive power. Using MIMIC-III, we show that: 1) predictive power distribution is different between nursing notes and discharge notes and 2) combining different types of notes could improve performance when the context length is large. Our findings suggest that a carefully selected sampling function could enable more efficient information extraction from clinical notes. | # Making The Most Out Of The Limited Context Length: Predictive Power Varies With Clinical Note Type And Note Section
Hongyi Zheng1 Yixin Tracy Zhu1 Lavender Yao Jiang1,2
Kyunghyun Cho1 Eric Karl Oermann1,2
NYU Center for Data Science1 NYU Langone Health2
{hz2212, yz5880, lyj2002, kyunghyun.cho}@nyu.edu, [email protected]
## Abstract
Recent advances in large language models have led to renewed interest in natural language processing in healthcare using the free text of clinical notes. One distinguishing characteristic of clinical notes is their long time span over multiple long documents. The unique structure of clinical notes creates a new design choice:
when the context length for a language model predictor is limited, which part of clinical notes should we choose as the input? Existing studies either choose the inputs with domain knowledge or simply truncate them. We propose a framework to analyze the sections with high predictive power. Using MIMIC-III, we show that: 1) predictive power distribution is different between nursing notes and discharge notes and 2) combining different types of notes could improve performance when the context length is large. Our findings suggest that a carefully selected sampling function could enable more efficient information extraction from clinical notes.
## 1 Introduction
Electronic Health Records (EHR) enable the development of language-model-based clinical predictors, which take in clinical notes to predict patient outcomes. Clinical notes in EHR exhibit three unique characteristics. 1) Clinical notes cover a long time span (from a few weeks to over a year), which results in the sparsity of information-rich sections.
2) Clinical notes also tend to be long: many discharge notes can take up to 10,000 tokens, which makes using the entire note as model input computationally expensive. 3) The strong noise level in medical notes (usually due to domain-specific abbreviations and typos) also poses a challenge for extracting information effectively.
These distinguishing characteristics of clinical notes lead to a new design choice: when the context length is limited due to the constrained compute or model architecture, what parts of clinical notes should we sample to maximize the model's performance? We propose a framework to subsample text sections with high predictive power.
Empirically, we explore the distribution of predictive power over clinical note types and sections by searching over these variables. We found that 1) the predictive power distribution is different between nursing notes and discharge notes: the predictive power is stronger at the beginning and end of discharge notes, while uniform within nursing notes; 2) combining sections from different types of notes improves performance when the context size is large, but harms performance when the context size is small. More details of the task formulation can be found in Section 3. Our code is publicly available on GitHub1.
## 2 Related Work
Existing methods for subsampling clinical notes for BERT-based models are mostly based on domain knowledge. For instance, Yang et al. (2022) and Darabi et al. (2020) choose discharge notes as they summarize patients' visits. Thapa et al. (2022) choose the notes within three days before a cutoff time in consideration of timeliness. While these assumptions are based on domain knowledge, they require human input and may not generalize. Thus, we are interested in exploring a data-driven sampling choice without assuming expert input.
Another related, but orthogonal approach to the limited context length problem is note aggregation.
Instead of subsampling notes, Huang et al. (2019)
propose to feed everything to the model, one maximum context length at a time, and aggregate the outputs for the final prediction. In their work, notes of one patient are split into a partition of subsequences, and the patient's re-admission risk is obtained by taking a weighted average of probabilities computed from each subsequence. This method's compute cost scales with the aggregated sequence length, which can be expensive for records with long clinical notes. In contrast, our method aims to find one single information-rich segment as input.

1https://github.com/nyuolab/EfficientTransformer
## 3 Method
We formalize our prediction task as follows: given a set of clinical notes x associated with an admission record, we want to predict the class label y which is our patient outcome of interest. Ideally, we want to train a classifier fw∗ to approximate p(y | x). The optimal parameter is
$$w^{*}=\arg\operatorname*{max}_{w}\ m(f_{w}(x),y),$$
where m is a metric function of interest. Nevertheless, due to the computational constraint, we need to reduce the input size via a sampling function sθ so that sθ(x) fits the input length limit and preserves information. Empirically, the optimal parameters are
$$w^{*},\theta^{*}=\arg\operatorname*{max}_{w,\theta}\ m(f_{w}(s_{\theta}(x)),y).$$
We say a sampling function sθ has higher predictive power if m(fw∗ (sθ(x)), y) is larger.
While current works choose sθ based on prior medical knowledge or simply fix it as a truncation function, we propose to explore different sampling functions sθ and find the one with the highest predictive power, making the most out of the limited context length.
Notice that in our work, s and θ are searched manually, instead of using learning algorithms.
## 4 Experimental Setup
We hypothesize that for 30-day all-cause readmission prediction, there exists an alternative sampling function that enables similar or better performance than the commonly used "truncated discharge notes". More formally, we focus on a parameterized sampling function with 2 variables: 1)
which section of tokens to include, 2) what type(s)
of clinical notes to use.
Model We finetuned two clinical language models in our experiments. The first is ClinicalBERT
(Alsentzer et al., 2019), which continued pretraining BERT on approximately 2 million notes from MIMIC-III and has a maximum sequence length of 512. The second is ClinicalLongformer (Li et al., 2022), which continued pretraining Longformer (Beltagy et al., 2020) on MIMIC-III notes and enables inputs of up to 4096 tokens. Both models are finetuned to predict the probability of 30-day all-cause readmission: that is, whether the patient will be re-admitted to the hospital within 30 days of their discharge date.
Dataset We use the discharge notes and nursing notes in the noteevent table of the MIMIC-III
database (Johnson et al., 2016). There are 40,000 de-identified admission records available to use after filtering out all admission records without nursing notes and discharge notes. The admission records are split into 75% train, 12.5% validation, and 12.5% test sets. Other types of medical notes such as physician notes are excluded from consideration in our experiments due to their scarcity in the database. See Appendix A for data preprocessing.
Sliding Window To extract different sections of the clinical notes, we use a sliding window technique. Let n be the window's width. Let l be the total number of tokens of the text. The window is placed based on an input parameter p ∈ [0, 1] indicating the location of the midpoint of the window, where the window interval is
$$[l p-n/2,l p+n/2].$$
In the case where lp−n/2 < 0, we shift the window so that the front of the window aligns with the beginning of the input tokens. In the case where lp+n/2 > l, we shift the window so that the back of the window matches the end of the tokens. Also, when *l < n*, we ignore the input p and pad the tokens to the maximum input length n.
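A minimal sketch of this window-placement rule, including the boundary shifts and the short-note padding described above, is given below; the padding token id is a placeholder.

```python
def extract_window(tokens, p, n, pad_id=0):
    """Select n tokens whose midpoint sits (approximately) at fraction p of the tokenized note."""
    l = len(tokens)
    if l < n:
        # Ignore p and pad the whole note to the window length n.
        return tokens + [pad_id] * (n - l)
    start = int(l * p - n / 2)
    start = max(start, 0)       # shift so the window starts at the first token if it underflows
    start = min(start, l - n)   # shift so the window ends at the last token if it overflows
    return tokens[start:start + n]
```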
We try 11 different values of p (0.0, 0.1, *· · ·* 1.0)
for ClinicalBERT and 2 values of p (0.0 and 1.0)
for ClinicalLongformer, along with an additional fragmented window trial p = both, which looks at the first n/2 and last n/2 tokens of the input text. Similarly, when *l < n*, we simply pad the sequence to the window's length.
Mixing Notes To control different types of clinical notes, we experimented with the following options: 1) first nursing note, 2) last nursing note, 3) discharge note, 4) first nursing notes + discharge note, 5) last nursing notes + discharge notes. For options with two types of notes, n/2 tokens are allocated to each type, and three values for p1 and p2 each (0.0, 1.0 and both) are used to select n/2 tokens from each type of note, resulting in 9 possible input parameter combinations.
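Reusing `extract_window` from the previous sketch, the two-note-type input construction might look like the following; the handling of p = both and the even n/2 split are illustrative simplifications.

```python
def extract_section(tokens, p, n, pad_id=0):
    # p may be a float in [0, 1] or the string "both" (first n/2 plus last n/2 tokens).
    if p == "both":
        half = n // 2
        return extract_window(tokens, 0.0, half, pad_id) + extract_window(tokens, 1.0, n - half, pad_id)
    return extract_window(tokens, p, n, pad_id)

def mix_notes(nursing_tokens, discharge_tokens, p1, p2, n, pad_id=0):
    """Allocate n/2 tokens to each note type, as in the mixed-note options above."""
    half = n // 2
    return (extract_section(nursing_tokens, p1, half, pad_id)
            + extract_section(discharge_tokens, p2, n - half, pad_id))
```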
## 5 Results

## 5.1 Different Sections In Nursing Notes And Discharge Notes
We finetune ClinicalBERT and ClinicalLongformer on different sections of nursing and discharge notes.
We used sliding windows to extract a sequence of tokens that meets the model's maximum sequence length. We have three key observations.
## Different Types Of Clinical Notes Show Disparate Predictive Power Distributions Over Text Sections

As shown in Figure 1, the discharge notes (blue line) show a quite uneven predictive power distribution, where the beginning (p = 0.0) and end (p = 1.0) sections of the text provide strong predictive power while the middle sector (0.2 ≤ p ≤ 0.5) shows a significant dip in predictive power. In contrast, the predictive power of the nursing notes (orange and green lines) turns out to be uniformly distributed: using different sections of the nursing notes (0.0 ≤ p ≤ 1.0) does not make a significant difference. We speculate that this discrepancy may stem from the domain knowledge that discharge notes are more structured than nursing notes: they often start with basic descriptions of the patient information and end with suggestions for the patients, whereas nursing notes often have multiple types of information mixed together throughout the text.
Nursing Notes Provide Modest Predictive Power.
Nursing notes produce decent re-admission prediction results: according to Figure 1 and Figure 2, although their predictive power is not as strong as that of discharge notes (which are typically written right before patients leave the hospital), they consistently achieve AUC ROC scores of over 0.7, which indicates modest predictability (Schneeweiss et al., 2001). Moreover, the first nursing notes (orange line in Figure 1, second group of bars in Figure 2) of each admission provide similar predictive power compared to the last nursing notes (green line in Figure 1, third group of bars in Figure 2), indicating the possibility of re-admission risk evaluation at an early stage of the admission. This finding is especially valuable from the perspective of intervention, as it is more practical to decide whether the patient should be discharged before the discharge note is written. Also, the abundance of nursing notes makes them a suitable alternative for re-admission risk evaluation tasks when discharge notes are unavailable.
Preserving the Beginning Tokens Is Not the Only Option. It is generally assumed that when the available input tokens are limited, the leading tokens of each clinical note should be used. Nevertheless, our experiments show that for discharge notes, spending half of the available tokens on the beginning section and the remaining half on the end section (p = both) achieves slightly better performance (AUC ROC of 0.849 versus 0.845 for ClinicalBERT, 0.869 versus 0.864 for ClinicalLongformer) compared to using the leading tokens only (p = 0.0). We speculate that this helps because it avoids the weakly predictive middle sector of the clinical notes.
## 5.2 Combining Sections From Different Types
We combine text sections from two different types of clinical notes and finetune ClinicalBERT
and ClinicalLongformer. This experiment helps us investigate the question: when the amount of available tokens is fixed, does combining information from different clinical notes work better than using discharge notes only? Since discharge notes are shown to provide strong predictive power in our prior experiments, we only investigate the note type combinations that include discharge notes (first nursing + discharge, last nursing + discharge).
## The Effect Of Allocating Tokens To Different Types Of Clinical Notes Depends On The Context Size

When the context size is relatively large (ClinicalLongformer, as shown on the right side of Figure 3), allocating the available tokens to different types of clinical notes (blue, orange, and green bars) leads to improvements in performance. The baseline (dashed red line) uses discharge notes only and has a lower AUC ROC (by 0.013 to 0.019) than models finetuned with combined notes. However, when the context is small (ClinicalBERT, as shown on the left side of Figure 3), distributing the already limited number of tokens to different clinical notes hurts the performance: the AUC ROC of ClinicalBERT finetuned with mixed notes falls below the baseline performance by 0.001 to 0.009. We speculate that this may be related to the uneven predictive power distribution in discharge notes:
if there are already a sufficient number of tokens covering the most informative sections of the discharge notes, the rest of the discharge notes might not be as informative as the prior nursing notes.
## 6 Discussion And Future Works
Our findings suggest that when the input size is constrained, a carefully selected sampling function that chooses the text with high predictive power could benefit model performance. Specifically on the task of readmission prediction from MIMIC-III
notes, we show that the predictive power varies across note types and note sections. This insight enables more efficient information extraction from long and noisy clinical notes, which is beneficial when computing resources are limited and the context length needs to be controlled.
Our findings call for two future directions. First, the performance disparities between ClinicalBERT
and ClinicalLongformer (subsection 5.2) indicate that the best strategy to allocate the input context is related to the maximum sequence length, and more work should be done to determine their exact relationship. Another direction is investigating the predictive power pattern based on the authorship of the clinical note. We showed (subsection 5.1) that discharge notes (written by doctors) have a more uneven predictive power pattern as compared to nursing notes (written by nurses). How the domain knowledge of the author would affect the clinical note quality is worth investigating.
## Limitations
We acknowledge three limitations in our experiments. First, in our second experiment, we fixed the window size for each type of note to be n/2.
A more comprehensive investigation could also search for the optimal window size for each note type. Second, although we explored one fragmented window configuration p = both, we did not explore other fragmented window configurations due to resource constraints. Lastly, we did not investigate more types of clinical notes (e.g., physician notes and ECG notes) because MIMIC-III has limited examples for other note types. We expect it to be resolved in future works with MIMIC-IV's publication (Johnson et al., 2023).
## References
Emily Alsentzer, John R. Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. 2019. Publicly available clinical bert embeddings.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. *CoRR*,
abs/2004.05150.
Sajad Darabi, Mohammad Kachuee, Shayan Fazeli, and Majid Sarrafzadeh. 2020. Taper: Time-aware patient ehr representation. *IEEE Journal of Biomedical and* Health Informatics, 24(11):3268–3275.
Kexin Huang, Jaan Altosaar, and Rajesh Ranganath.
2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. *arXiv preprint* arXiv:1904.05342.
Alistair Johnson, Lucas Bulgarelli, Lu Shen, Alvin Gayles, Ayad Shammout, Steven Horng, Tom Pollard, Sicheng Hao, Benjamin Moody, Brian Gow, Li-wei Lehman, Leo Celi, and Roger Mark. 2023. Mimic-iv, a freely accessible electronic health record dataset.
Scientific Data, 10:1.
Alistair E.W. Johnson, Tom J. Pollard, Lu Shen, Li-wei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. MIMIC-III, a freely accessible critical care database. *Nature*.
Yikuan Li, Ramsey M. Wehbe, Faraz S. Ahmad, Hanyin Wang, and Yuan Luo. 2022. Clinical-longformer and clinical-bigbird: Transformers for long clinical sequences. *CoRR*, abs/2201.11838.
Sebastian Schneeweiss, John D Seeger, Malcolm Maclure, Philip S Wang, Jerry Avorn, and Robert J
Glynn. 2001. Performance of comorbidity scores to control for confounding in epidemiologic studies using claims data. *American journal of epidemiology*,
154(9):854–864.
Nischay Bikram Thapa, Sattar Seifollahi, and Sona Taheri. 2022. Hospital readmission prediction using clinical admission notes. In Australasian Computer Science Week 2022, pages 193–199.
Grace Yang, Ming Cao, Lavender Y Jiang, Xujin C
Liu, Alexander Cheung, Hannah Weiss, Davied Kurland, Kyunghyun Cho, and Eric K Oermann. 2022.
Language model classifier aligns better with physician word sensitivity than xgboost on readmission prediction. *arXiv preprint arXiv:2211.07047*.
## Appendices

## A Preprocessing
We preprocessed the dataset with the following approach: First of all, admission records with missing discharge notes or missing nursing notes are eliminated. Then, for each remaining admission record, the nursing notes associated with that record are sorted according to their timestamp. The first and last created nursing notes for each admission are selected and concatenated with the discharge notes of the same admission record to produce the clinical note set for every admission. Lastly, we clean the datasets by removing the de-identification patterns
('[** de-identified info **]') in the clinical notes, which usually occupy a lot of tokens. |
yang-etal-2023-intriguing | Intriguing Effect of the Correlation Prior on {ICD}-9 Code Assignment | https://aclanthology.org/2023.acl-srw.19 | The Ninth Revision of the International Classification of Diseases (ICD-9) is a standardized coding system used to classify health conditions. It is used for billing, tracking individual patient conditions, and for epidemiology. The highly detailed and technical nature of the codes and their associated medical conditions make it difficult for humans to accurately record them. Researchers have explored the use of neural networks, particularly language models, for automated ICD-9 code assignment. However, the imbalanced distribution of ICD-9 codes leads to poor performance. One solution is to use domain knowledge to incorporate a useful prior. This paper evaluates the usefulness of the correlation bias: we hypothesize that correlations between ICD-9 codes and other medical codes could help improve language models{'} performance. We showed that while the correlation bias worsens the overall performance, the effect on individual class can be negative or positive. Performance on classes that are more imbalanced and less correlated with other codes is more sensitive to incorporating the correlation bias. This suggests that while the correlation bias has potential to improve ICD-9 code assignment in certain cases, the applicability criteria need to be more carefully studied. | # Intriguing Effect Of The Correlation Prior On Icd-9 Code Assignment
Zihao Yang1,2, Chenkang Zhang1,2, Muru Wu1,2, Xujin Chris Liu2,3, Lavender Yao Jiang1,2, Kyunghyun Cho1,4,5,6, Eric Karl Oermann2,7,8,1 1Center for Data Science, New York University 2Department of Neurosurgery, NYU Langone Health 3Department of Electrical and Computer Engineering, NYU Tandon School of Engineering 4Courant Institute of Mathematical Sciences, New York University 5Canadian Institute for Advanced Research 6Prescient Design 7Department of Radiology, NYU Langone Health 8Neuroscience Institute, NYU Langone Health
{gavin.yang,stephen.zhang,wm1077,chris.liu,lyj2002,kyunghyun.cho}@nyu.edu, [email protected]
## Abstract
The Ninth Revision of the International Classification of Diseases (ICD-9) is a standardized coding system used to classify health conditions. It is used for billing, tracking individual patient conditions, and for epidemiology.
The highly detailed and technical nature of the codes and their associated medical conditions make it difficult for humans to accurately record them. Researchers have explored the use of neural networks, particularly language models, for automated ICD-9 code assignment.
However, the imbalanced distribution of ICD-9 codes leads to poor performance. One solution is to use domain knowledge to incorporate a useful prior. This paper evaluates the usefulness of the correlation bias: we hypothesize that correlations between ICD-9 codes and other medical codes could help improve language models' performance. We showed that while the correlation bias worsens the overall performance, the effect on individual classes can be negative or positive.1 Performance on classes that are more imbalanced and less correlated with other codes is more sensitive to incorporating the correlation bias. This suggests that while the correlation bias has potential to improve ICD-9 code assignment in certain cases, the applicability criteria need to be more carefully studied.
## 1 Introduction
Electronic Health Records (EHRs) contain patient information in the form of clinical notes, structured data tables, and biomedical imaging and time series. For easy tracking and analysis of health data across different healthcare systems, and critically for billing purposes, hospitals and insurance companies assign codes of a standardized coding system to characterize the clinical conditions of patients. Wrong code assignments may result in billing issues that increase patients' expenses substantially, misdiagnosis, and poor tracking of population-level health conditions nationally. The Ninth Revision of the International Classification of Diseases (ICD-9) is a system used worldwide to classify and code diseases, injuries, and other health conditions. There have been extensive efforts studying the automated assignment of ICD-9 codes to health records and relevant documents (Yan et al., 2022).

1The implementation code is available on GitHub: https://github.com/nyuolab/text2table
With recent developments in NLP, there has been a focus on the use of neural networks (Yu et al., 2019; Mullenbach et al., 2018; Teng et al.,
2020). One particularly recent direction is in the use of language models. Originally introduced in BERT (Devlin et al., 2019), the recipe of pretraining and finetuning of language models has shown promising performance in many tasks. Researchers have applied BERT for assigning ICD-9 codes from medical documents (Huang et al., 2022; Pascual et al., 2021; Zhang et al., 2020). However, BERT and other encoder-based language models perform poorly on ICD-9 code assignment (Yan et al., 2022).
One challenge is the extremely imbalanced distribution of ICD-9 codes. Following the distribution of medical conditions in the real world, some codes occur frequently while other codes may appear only once (Yan et al., 2022). It is difficult for models to correctly predict minority codes because few samples exist in the dataset (Sun et al., 2009). A
proposed solution is to incorporate domain knowledge that provides useful priors for the minority codes (Bai and Vucetic, 2019; Wang et al., 2020; Zeng et al., 2019).
We hypothesize that one useful prior for ICD-9 code assignment is the correlation between ICD-9 codes and other relevant coding systems. We term other relevant coding systems auxiliary tasks because language models in our experiments predict codes from these systems in addition to ICD-9 codes. The auxiliary tasks are Current Procedural Terminology (CPT) codes and Diagnosis-Related Group (DRG) codes. This correlation prior stems from the domain knowledge that labels from other coding systems give information about ICD-9 codes. For example, patients who underwent artery bypass surgeries (CPT code 33533) are likely to have heart failures (ICD-9 code 428.0). To test our hypothesis, we investigate the effect of multitasking on correlated auxiliary tasks and encouraging similar label correlations between training labels and model predictions through regularization. We showed that 1) on average, utilizing correlations hurts language models' performance on predicting ICD-9 codes from discharge summaries, 2) for each ICD-9 code, utilizing correlations might hurt or help, 3) ICD-9 codes that are more imbalanced and less correlated with auxiliary tasks experience larger performance changes (both positive and negative) from incorporating the correlation prior. Our findings suggest that the correlation prior has the potential to improve predictions of certain ICD-9 codes, but this method suffers from instability when the main task has an imbalanced label distribution and a weak correlation with auxiliary tasks.
## 2 Related Work
Domain knowledge One useful prior for ICD9 codes is its hierarchical structure. For example, a high-level code (e.g., 428.0 heart failure) encompasses its corresponding low-level codes (e.g.,
428.1 left heart failure, 428.2 systolic heart failure). Tsai et al. (2019) incorporated this hierarchical prior and improved models' performance on predicting imbalanced ICD-9 codes.
CorrLoss CorrLoss is a regularization technique
(Rieger et al., 2022) that encourages consistent label correlations between ground truth and predictions. Rieger et al. (2022) use CorrLoss on the facial affect recognition task to integrate the correlation priors for facial movements. CorrLoss can be used in any domain where correlation between prediction targets provides a useful signal. Thus, we adopt CorrLoss to integrate information about the correlations between different kinds of diagnosis and procedure codes.
## 3 Methods
Task overview We formulate the task of code assignment as a multilabel text classification task because each patient has multiple codes corresponding to their discharge summaries. Each binary label in the task corresponds to a specific code. Formally, our classifier aims to approximate the probability p(y1, . . . , yn | x), where each yi is an ICD-9 code and x is a discharge summary.
The Correlation Prior We hypothesize that correlations between ICD-9 and other coding systems are a useful prior for ICD-9 code assignment and choose to incorporate the prior in two ways.
First, we added the auxiliary tasks of predicting other medical codes (e.g., CPT). Formally, we train a classifier to approximate
$$p(y,z\mid x)=p(y\mid x)\,p(z\mid x,y),\qquad(1)$$
where y is a sequence of ICD-9 codes (the main task), z is a sequence of other medical codes (the auxiliary task), and x is a discharge summary. Our domain knowledge assumes that the absolute correlation abs(ρ(*y, z*)|x) > 0, so *y, z* are not conditionally independent given x and p(z|*x, y*) ̸= p(z|x).
This is desirable because otherwise, we are strictly increasing the difficulty of the task from learning p(y|x) to learning p(y|x) p(z|x).
There are benefits and concerns associated with Equation 1, and their trade-off is unclear *a priori*.
One benefit is that extra dependency information from p(z|*x, y*) could potentially simplify learning p(*y, z*|x). One drawback is that the additional prediction targets z could worsen the curse of dimensionality. Whether the benefit would outweigh the drawback is difficult to determine without running a controlled experiment.
Second, we used CorrLoss to encourage similar label correlation patterns between training and predictions. Formally, we added a regularization term $c = \sum_{i \neq j} c(d_i, d_j)$. Each summation term scales with a correlation difference:
$$c(d_{i},d_{j})\propto|\rho(d_{i},d_{j})_{y_{\mathrm{train}}}-\rho(d_{i},d_{j})_{\hat{y}}|,\quad(2)$$
where $d_i, d_j$ are different classes, $\rho(d_i, d_j)_v$ is the correlation between class $d_i$ and $d_j$ in a vector $v$, $y_{\mathrm{train}}$ is the training labels, $\hat{y}$ is the predicted labels, and $\rho$ is the Pearson correlation function.

|              |          | PROC       | PROC+CPT | PROC+DRG | PROC+DIAG |
|--------------|----------|------------|----------|----------|-----------|
| ClinicalBERT | original | **0.4528** | 0.397    | 0.3939   | 0.408     |
|              | CorrLoss | 0.4037     | 0.3594   | 0.3272   | 0.363     |
| RoBERTa      | original | **0.4421** | 0.4009   | 0.3884   | 0.4116    |
|              | CorrLoss | 0.3736     | 0.3236   | 0.2816   | 0.3692    |
| Longformer   | original | **0.4712** | 0.4227   | 0.3886   | 0.4219    |
|              | CorrLoss | 0.4139     | 0.335    | 0.212    | 0.3549    |

Table 1: Macro F1 scores of experiments, in which procedure ICD-9 is the main task, on the MIMIC-III-50 test set.
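A minimal PyTorch sketch of this regularizer, applied on top of a standard multilabel BCE objective over the concatenated main and auxiliary labels, is given below; the batch-level correlation estimate and the fixed weighting are simplifications of the original CorrLoss formulation (Rieger et al., 2022), not our exact implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_corr(labels, eps=1e-8):
    """Pearson correlation between label columns, estimated over a (batch_size, num_labels) matrix."""
    centered = labels - labels.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (labels.shape[0] - 1)
    std = centered.std(dim=0, unbiased=True) + eps
    return cov / (std[:, None] * std[None, :])

def corr_loss(pred_probs, target_corr):
    """Penalize the absolute difference between predicted and training label correlations (Eq. 2)."""
    diff = (pairwise_corr(pred_probs) - target_corr).abs()
    off_diag = ~torch.eye(diff.shape[0], dtype=torch.bool, device=diff.device)
    return diff[off_diag].mean()

def total_loss(logits, targets, target_corr, lam=0.1):
    """Multilabel BCE plus the correlation penalty; target_corr is precomputed once from the
    training label matrix, e.g. target_corr = pairwise_corr(train_labels.float())."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return bce + lam * corr_loss(torch.sigmoid(logits), target_corr)
```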
Dataset We built two datasets from the Medical Information Mart for Intensive Care III (MIMIC-III) (Johnson et al., 2016), a database of EHRs. Our first dataset, subsequently referred to as "MIMIC-III", contains examples of each patient's discharge summary and associated diagnosis and procedure codes (diagnosis ICD-9, procedure ICD-9, CPT,
and DRG). Because this dataset is extremely imbalanced, we further select the top 50 most frequently used codes for each kind of coding system to construct a second dataset that represents a more ideal scenario. Following the convention of related literature, we call this dataset "MIMIC-III-50" (Vu et al.,
2020; Luo et al., 2021; Li and Yu, 2020). Statistics of the MIMIC-III dataset are in Appendix A.
Models and Evaluation We use ClinicalBERT
(Alsentzer et al., 2019), RoBERTa (Liu et al., 2019),
Longformer (Beltagy et al., 2020) (justification in Appendix C). We use the macro F1 as our metric for comparison because this metric treats all classes equally, which means minority codes are as important as majority codes in evaluation (Branco et al.,
2016; Sun et al., 2009; Ferri et al., 2009). Because this is an imbalanced classification problem, the default threshold of 0.5 is not suitable (Zhou and Liu, 2006; Zou et al., 2016). Instead, we tune the threshold according to the precision-recall curve to maximize the F1 score for each individual label.
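A minimal sketch of this per-label threshold selection, assuming scikit-learn and a held-out validation set, is shown below.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_thresholds(y_true, y_prob):
    """For each label, pick the probability threshold that maximizes F1 on the validation set."""
    thresholds = np.full(y_true.shape[1], 0.5)
    for j in range(y_true.shape[1]):
        precision, recall, thresh = precision_recall_curve(y_true[:, j], y_prob[:, j])
        f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
        if len(thresh) > 0:
            # precision_recall_curve returns one more (precision, recall) point than thresholds.
            thresholds[j] = thresh[np.argmax(f1[:-1])]
    return thresholds
```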
## 4 Experiments
To test whether the correlation prior is useful for ICD code assignment, we incorporate multitasking (Equation 1) and CorrLoss (Equation 2) into our model and check if they improve performance.
Specifically, we studied two main tasks (diagnosis ICD-9 codes and procedure ICD-9 codes). For each main task, we added one of the three auxiliary tasks: DRG codes, CPT codes, and the other ICD-9 codes (for diagnosis ICD-9 code, the auxiliary task can be procedure ICD-9 code, and vice versa). We trained both main-task-only models and multitasking models with and without CorrLoss.
## 5 Results
Multitasking and CorrLoss hurt performance on MIMIC-III-50 and do not significantly impact performance on MIMIC-III. Table 1 shows the macro-F1 score on procedure ICD-9 of the MIMIC-III-50 dataset. We observe two patterns for each language model. First, adding auxiliary tasks always decreases the performance of models in comparison to predicting main tasks only. Second, regularizing with CorrLoss always decreases the performance of models in comparison to not using CorrLoss. The same pattern exists for predicting diagnosis ICD-9 of the MIMIC-III-50 dataset (Appendix Table 6). However, on the full MIMIC-III
dataset, multitasking and CorrLoss do not impact models' performance significantly (Appendix B).
## 6 Analysis
Since the macro F1 score does not show significant changes from multitasking and CorrLoss on the full MIMIC-III dataset, we investigate whether the performance changes for individual labels. Specifically, we analyzed how label imbalance (measured by Shannon entropy, defined in Appendix D.1) and label correlation (measured by the average absolute Pearson correlation coefficient between each main task label and all auxiliary task labels, as defined in Appendix D.1) affect the model's performance.
For individual ICD-9 code, incorporating the correlation prior may hurt or help. Figure 1 shows that there exist labels with both negative and positive performance changes.
Labels that are more imbalanced and less correlated with auxiliary labels experience larger changes. Figure 1 shows two relationships: (1) more balanced labels (closer to the right) have smaller performance changes (spread of dots along the y axis), (2) labels that are more correlated with the auxiliary task (darker dots) have smaller performance changes (spread along the y axis). All the other plots of different tasks and setups show similar patterns
(Appendix D.1).
|              |       | top50 | bottom50 |
|--------------|-------|-------|----------|
| ClinicalBERT | +CPT  | 0.333 | 0.273    |
|              | +DRG  | 0.28  | 0.413    |
|              | +DIAG | 0.3   | 0.387    |
| RoBERTa      | +CPT  | 0.4   | 0.3      |
|              | +DRG  | 0.393 | 0.353    |
|              | +DIAG | 0.313 | 0.287    |
| Longformer   | +CPT  | 0.34  | 0.427    |
|              | +DRG  | 0.34  | 0.28     |
|              | +DIAG | 0.347 | 0.307    |

Table 2: The percentages of positive macro F1 score changes on the top 50 most balanced labels and the bottom 50 least balanced labels, with different auxiliary tasks and models.
In both extreme scenarios (imbalanced labels, small correlation with auxiliary labels) and ideal scenarios (balanced labels, high correlation with auxiliary labels), **incorporating correlation is more likely to hurt than help.** Table 2 shows that for the top 50 most balanced labels and the bottom 50 least balanced labels, if we utilize correlations (with multitasking and CorrLoss), the percentage of positive F1 score changes is always less than 50%. Table 3 shows that for the top 50 labels that are most correlated with the auxiliary tasks and the bottom 50 labels that are least correlated with the auxiliary tasks, utilizing correlations also leads to < 50% positive F1 score changes.

|              |       | top50 | bottom50 |
|--------------|-------|-------|----------|
| ClinicalBERT | +CPT  | 0.333 | 0.327    |
|              | +DRG  | 0.32  | 0.327    |
|              | +DIAG | 0.293 | 0.247    |
| RoBERTa      | +CPT  | 0.487 | 0.333    |
|              | +DRG  | 0.373 | 0.387    |
|              | +DIAG | 0.267 | 0.293    |
| Longformer   | +CPT  | 0.433 | 0.327    |
|              | +DRG  | 0.28  | 0.273    |
|              | +DIAG | 0.333 | 0.24     |

Table 3: The percentages of positive macro F1 score changes on the top 50 labels most correlated with the auxiliary task and the bottom 50 labels least correlated with the auxiliary task, with different auxiliary tasks and models.
## 7 Discussion
Since multitasking and CorrLoss worsen language models' overall performance, it contradicts our hypothesis that the correlations between ICD-9 codes and other medical codes would be a useful prior.
Nevertheless, the performance changes on individual labels are more nuanced and show potential for improving prediction of certain ICD-9 codes. We wonder what characterizes the labels that benefit from incorporating the correlation prior (dots with positive changes in Figure 1). Perhaps for those labels, the additional dependency information gained from the auxiliary tasks outweighs the increased learning complexity from a larger output space. A
prerequisite for a rigorous investigation would be quantifying the trade-off between the dependency information and the learning complexity.
We recognize three limitations that may influence the interpretation of our results and call for future works. First, we did not conduct a hyperparameter search for the regularization strength of CorrLoss. Second, since F1 score decreases are substantial and universal across all experiments on MIMIC-III-50, we did not run experiments multiple times with different seeds. Third, we did not provide a rigorous explanation of what caused our empirical findings. Future works can investigate the plausible hypothesis that the trade-off between the dependency information and the learning complexity causes these findings. Besides these limitations, future works can also investigate more scenarios and methods of incorporating the correlation prior.
## References
Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In *Proceedings of the 2nd* Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Tian Bai and Slobodan Vucetic. 2019. Improving Medical Code Prediction from Clinical Text via Incorporating Online Knowledge Sources. In The World Wide Web Conference, WWW '19, pages 72–82, New York, NY, USA. Association for Computing Machinery.
Iz Beltagy, Matthew E. Peters, and Arman Cohan.
2020. Longformer: The Long-Document Transformer. ArXiv:2004.05150 [cs].
Paula Branco, Luís Torgo, and Rita P. Ribeiro. 2016. A
Survey of Predictive Modeling on Imbalanced Domains. *ACM Computing Surveys*, 49(2):31:1–31:50.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages
4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
C. Ferri, J. Hernández-Orallo, and R. Modroiu. 2009.
An experimental comparison of performance measures for classification. *Pattern Recognition Letters*,
30(1):27–38.
Chao-Wei Huang, Shang-Chi Tsai, and Yun-Nung Chen.
2022. PLM-ICD: Automatic ICD Coding with Pretrained Language Models. In *Proceedings of the* 4th Clinical Natural Language Processing Workshop, pages 10–20, Seattle, WA. Association for Computational Linguistics.
Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, Liwei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. MIMIC-III,
a freely accessible critical care database. Scientific Data, 3(1):160035. Number: 1 Publisher: Nature Publishing Group.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Fei Li and Hong Yu. 2020. ICD Coding from Clinical Text Using Multi-Filter Residual Convolutional Neural Network. *Proceedings of the AAAI Conference on* Artificial Intelligence, 34(05):8180–8187. Number:
05.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv:1907.11692 [cs].
Junyu Luo, Cao Xiao, Lucas Glass, Jimeng Sun, and Fenglong Ma. 2021. Fusion: Towards Automated ICD Coding via Feature Compression. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 2096–2101, Online. Association for Computational Linguistics.
James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable Prediction of Medical Codes from Clinical Text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101–1111, New Orleans, Louisiana. Association for Computational Linguistics.
Damian Pascual, Sandro Luck, and Roger Wattenhofer.
2021. Towards BERT-based Automatic ICD Coding:
Limitations and Opportunities. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 54–63, Online. Association for Computational Linguistics.
Ines Rieger, Jaspar Pahl, Bettina Finzel, and Ute Schmid.
2022. CorrLoss: Integrating Co-Occurrence Domain Knowledge for Affect Recognition. In 2022 26th International Conference on Pattern Recognition (ICPR), pages 798–804. ISSN: 2831-7475.
Yanmin Sun, Andrew K. C. Wong, and Mohamed S.
Kamel. 2009. Classification of imbalanced data: a review. International Journal of Pattern Recognition and Artificial Intelligence, 23(04):687–719. Publisher: World Scientific Publishing Co.
Fei Teng, Wei Yang, Li Chen, LuFei Huang, and Qiang Xu. 2020. Explainable Prediction of Medical Codes With Knowledge Graphs. *Frontiers in Bioengineering and Biotechnology*, 8.
Shang-Chi Tsai, Ting-Yun Chang, and Yun-Nung Chen.
2019. Leveraging Hierarchical Category Knowledge for Data-Imbalanced Multi-Label Diagnostic Text Understanding. In *Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)*, pages 39–43, Hong Kong. Association for Computational Linguistics.
Thanh Vu, Dat Quoc Nguyen, and Anthony Nguyen.
2020. A Label Attention Model for ICD Coding from Clinical Text. volume 4, pages 3335–3341.
ISSN: 1045-0823.
Ke Wang, Xuyan Chen, Ning Chen, and Ting Chen.
2020. Automatic Emergency Diagnosis with Knowledge-Based Tree Decoding. volume 4, pages 3407–3414. ISSN: 1045-0823.
Chenwei Yan, Xiangling Fu, Xien Liu, Yuanqiu Zhang, Yue Gao, Ji Wu, and Qiang Li. 2022. A survey of automated International Classification of Diseases coding: development, challenges, and applications.
Intelligent Medicine, 2(3):161–173.
Ying Yu, Min Li, Liangliang Liu, Zhihui Fei, FangXiang Wu, and Jianxin Wang. 2019. Automatic ICD
code assignment of Chinese clinical notes based on multilayer attention BiRNN. *Journal of Biomedical* Informatics, 91:103114.
Min Zeng, Min Li, Zhihui Fei, Ying Yu, Yi Pan, and Jianxin Wang. 2019. Automatic ICD-9 coding via deep transfer learning. *Neurocomputing*, 324:43–50.
Zachariah Zhang, Jingshu Liu, and Narges Razavian.
2020. BERT-XML: Large Scale Automated ICD
Coding Using BERT Pretraining. In Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 24–34, Online. Association for Computational Linguistics.
Zhi-Hua Zhou and Xu-Ying Liu. 2006. Training costsensitive neural networks with methods addressing the class imbalance problem. *IEEE Transactions* on Knowledge and Data Engineering, 18(1):63–77.
Conference Name: IEEE Transactions on Knowledge and Data Engineering.
Quan Zou, Sifa Xie, Ziyu Lin, Meihong Wu, and Ying Ju. 2016. Finding the Best Classification Threshold in Imbalanced Classification. *Big Data Research*,
5:2–8.
## A Dataset Statistics
## B Results
|              |          | PROC   | PROC+CPT | PROC+DRG | PROC+DIAG |
|--------------|----------|--------|----------|----------|-----------|
| ClinicalBERT | original | 0.0098 | 0.0094   | 0.0091   | 0.0097    |
|              | CorrLoss | 0.0102 | 0.0099   | 0.0088   | 0.0087    |
| RoBERTa      | original | 0.0097 | 0.0089   | 0.0087   | 0.0088    |
|              | CorrLoss | 0.0095 | 0.0095   | 0.0098   | 0.0089    |
| Longformer   | original | 0.0088 | 0.0088   | 0.0095   | 0.0085    |
|              | CorrLoss | 0.0094 | 0.0085   | 0.0091   | 0.0078    |

Table 4: Macro F1 scores of experiments, in which procedure ICD-9 is the main task, on the full MIMIC-III test set.
|              |          | DIAG   | DIAG+CPT | DIAG+DRG | DIAG+PROC |
|--------------|----------|--------|----------|----------|-----------|
| ClinicalBERT | original | 0.0068 | 0.0066   | 0.0066   | 0.0067    |
|              | CorrLoss | 0.0066 | 0.0069   | 0.0069   | 0.0068    |
| RoBERTa      | original | 0.0069 | 0.0065   | 0.0062   | 0.0065    |
|              | CorrLoss | 0.0071 | 0.0071   | 0.0066   | 0.0065    |
| Longformer   | original | 0.0072 | 0.0069   | 0.007    | 0.0071    |
|              | CorrLoss | 0.007  | 0.0068   | 0.0076   | 0.0071    |

Table 5: Macro F1 scores of experiments, in which diagnosis ICD-9 is the main task, on the full MIMIC-III test set.
| Model | Loss | DIAG | DIAG+CPT | DIAG+DRG | DIAG+PROC |
|---|---|---|---|---|---|
| ClinicalBERT | original | 0.3755 | 0.3296 | 0.3351 | 0.3351 |
| | CorrLoss | 0.3235 | 0.2966 | 0.2947 | 0.2992 |
| RoBERTa | original | 0.3851 | 0.3255 | 0.3307 | 0.3341 |
| | CorrLoss | 0.3143 | 0.2822 | 0.2713 | 0.2939 |
| Longformer | original | 0.4408 | 0.349 | 0.3544 | 0.3552 |
| | CorrLoss | 0.3364 | 0.2963 | 0.2906 | 0.3027 |

Table 6: Macro F1 scores of experiments, in which diagnosis ICD-9 is the main task, on MIMIC-III-50 test set.
## C Justification Of Models Representation Of Correlations
The variant of ClinicalBERT we use is Bio+Discharge Summary BERT model because it was further trained on discharge summaries from MIMIC-III after initialized from BioBERT (Lee et al., 2020).
We use RoBERTa because it is a variant of vanilla BERT that was trained differently to improve its performance on a range of NLP tasks.
We use Longformer because it can handle long text sequences. BERT and many BERT-based models cannot handle text sequences longer than 512 tokens. Many tokenized discharge summaries are text sequences longer than 512 tokens, and Longformer can benefit from a more complete understanding of discharge summaries.
Each model represents a different improvement on top of vanilla BERT: ClinicalBERT improves through domain-specific pretraining; RoBERTa improves through tuning training setup; and Longformer improves through incorporating more information from the input. With these models, we cover a significant part of the improvement spectrum, which shows that the pattern we present is generalizable to different models.
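For illustration, the sketch below (not part of the original experiments) contrasts how much of a long discharge summary survives tokenization under a 512-token BERT-style limit versus Longformer's 4096-token limit. The checkpoint names are common public identifiers assumed to correspond to the models discussed; they are not taken from this paper.

```python
from transformers import AutoTokenizer

# Assumed public checkpoints for the clinical BERT variant and Longformer.
bert_tok = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
long_tok = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")

text = " ".join(["token"] * 1000)  # stand-in for a long discharge summary

# BERT-style encoders truncate to 512 tokens, discarding the rest of the note.
bert_ids = bert_tok(text, truncation=True, max_length=512)["input_ids"]
# Longformer keeps up to 4096 tokens of the same note.
long_ids = long_tok(text, truncation=True, max_length=4096)["input_ids"]
print(len(bert_ids), len(long_ids))
```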
## D Analysis
## D.1 Performance On Each Label
**Other figures.** Since there are 72 experiments that have auxiliary tasks, there are 72 corresponding plots. Thus, it is unreasonable to include all of them in the appendix. You can find all plots in our github repository: https://github.com/nyuolab/text2table/tree/main/notebooks.
## Shannon Entropy
$$H(X)=-\sum_{i=1}^{n}p(x_{i})\log_{2}p(x_{i})\qquad{\mathrm{(3)}}$$
In this equation, H(X) represents the entropy of a label X with possible outcomes x1, x2, ..., xn.
In our context, n = 2 because a label only has two possible outcomes: 1 (positive) or 0 (negative).
The term p(xi) represents the probability of the i-th outcome, and the logarithm is taken with base 2 to give the result in units of bits. The sum is taken over all possible outcomes of X. With only two possible outcomes, a label's Shannon entropy will be close to 1 if it is balanced, and will be close to 0 if it is imbalanced.
$$C(a,B)=\frac{\sum_{b\in B}|P(a,b)|}{\mathrm{card}(B)}\qquad\qquad(4)$$
In this equation, C(a, B) represents the correlations between a label of the main task a and a set containing labels of the auxiliary task. For each label of the auxiliary task b ∈ B, |P(a, b)| represents the absolute value of the Pearson correlation coefficient between a and b. card(B) is the cardinality of B (i.e. the number of labels in B).
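As a minimal sketch (our own helper functions, not code from this work), Equations (3) and (4) can be computed for binary label columns as follows, assuming labels are stored as 0/1 NumPy arrays with one column per auxiliary-task label.

```python
import numpy as np

def shannon_entropy(binary_labels: np.ndarray) -> float:
    # Equation (3) for a binary label column: ~1 if balanced, ~0 if imbalanced.
    p = float(binary_labels.mean())
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mean_abs_correlation(main_label: np.ndarray, aux_labels: np.ndarray) -> float:
    # Equation (4): C(a, B) as the mean absolute Pearson correlation between a
    # main-task label a and every auxiliary-task label b in B (columns of aux_labels).
    corrs = [
        abs(np.corrcoef(main_label, aux_labels[:, j])[0, 1])
        for j in range(aux_labels.shape[1])
    ]
    return float(np.mean(corrs))
```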
## D.2 Performance In Different Scenarios
| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.453 | 0.32 |
| | +DRG | 0.54 | 0.293 |
| | +PROC | 0.48 | 0.38 |
| RoBERTa | +CPT | 0.48 | 0.313 |
| | +DRG | 0.507 | 0.307 |
| | +PROC | 0.48 | 0.333 |
| Longformer | +CPT | 0.5 | 0.32 |
| | +DRG | 0.48 | 0.393 |
| | +PROC | 0.433 | 0.287 |

Table 7: The percentages of positive macro F1 score changes on the top 50 most balanced diagnosis ICD-9 labels and on the bottom 50 least balanced diagnosis ICD-9 labels, with different auxiliary tasks and models. CorrLoss is not included in any of the experiments we examine in this table.
| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.347 | 0.36 |
| | +DRG | 0.327 | 0.313 |
| | +DIAG | 0.273 | 0.28 |
| RoBERTa | +CPT | 0.32 | 0.32 |
| | +DRG | 0.353 | 0.36 |
| | +DIAG | 0.273 | 0.22 |
| Longformer | +CPT | 0.353 | 0.367 |
| | +DRG | 0.28 | 0.293 |
| | +DIAG | 0.307 | 0.26 |

Table 8: The percentages of positive macro F1 score changes on the top 50 most balanced procedure ICD-9 labels and on the bottom 50 least balanced procedure ICD-9 labels, with different auxiliary tasks and models. CorrLoss is included in all experiments we examine in this table.
| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.413 | 0.307 |
| | +DRG | 0.533 | 0.28 |
| | +PROC | 0.487 | 0.293 |
| RoBERTa | +CPT | 0.46 | 0.3 |
| | +DRG | 0.493 | 0.373 |
| | +PROC | 0.473 | 0.34 |
| Longformer | +CPT | 0.453 | 0.293 |
| | +DRG | 0.487 | 0.34 |
| | +PROC | 0.5 | 0.307 |

Table 9: The percentages of positive macro F1 score changes on the top 50 most balanced diagnosis ICD-9 labels and on the bottom 50 least balanced diagnosis ICD-9 labels, with different auxiliary tasks and models. CorrLoss is included in all experiments we examine in this table.

| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.507 | 0.333 |
| | +DRG | 0.493 | 0.287 |
| | +PROC | 0.473 | 0.347 |
| RoBERTa | +CPT | 0.48 | 0.247 |
| | +DRG | 0.513 | 0.36 |
| | +PROC | 0.46 | 0.347 |
| Longformer | +CPT | 0.487 | 0.313 |
| | +DRG | 0.493 | 0.34 |
| | +PROC | 0.427 | 0.313 |

Table 11: The percentages of positive macro F1 score changes on the top 50 diagnosis ICD-9 labels that are most correlated with the auxiliary task and on the bottom 50 diagnosis ICD-9 labels that are least correlated with the auxiliary task, with different auxiliary tasks and models. CorrLoss is not included in any of the experiments we examine in this table.
| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.467 | 0.32 |
| RoBERTa | +CPT | 0.387 | 0.267 |
| Longformer | +CPT | 0.427 | 0.367 |

Table 10: The percentages of positive macro F1 score changes on the top 50 procedure ICD-9 labels that are most correlated with the auxiliary task and on the bottom 50 procedure ICD-9 labels that are least correlated with the auxiliary task, with different auxiliary tasks and models. CorrLoss is not included in any of the experiments we examine in this table.

| Model | Aux. task | top50 | bottom50 |
|---|---|---|---|
| ClinicalBERT | +CPT | 0.467 | 0.373 |
| | +DRG | 0.52 | 0.3 |
| | +PROC | 0.46 | 0.333 |
| RoBERTa | +CPT | 0.493 | 0.32 |
| | +DRG | 0.52 | 0.433 |
| | +PROC | 0.473 | 0.253 |
| Longformer | +CPT | 0.46 | 0.32 |
| | +DRG | 0.513 | 0.467 |
| | +PROC | 0.453 | 0.34 |

Table 12: The percentages of positive macro F1 score changes on the top 50 diagnosis ICD-9 labels that are most correlated with the auxiliary task and on the bottom 50 diagnosis ICD-9 labels that are least correlated with the auxiliary task, with different auxiliary tasks and models. CorrLoss is included in all experiments we examine in this table. |
baran-etal-2023-classical | Classical Out-of-Distribution Detection Methods Benchmark in Text Classification Tasks | https://aclanthology.org/2023.acl-srw.20 | State-of-the-art models can perform well in controlled environments, but they often struggle when presented with out-of-distribution (OOD) examples, making OOD detection a critical component of NLP systems. In this paper, we focus on highlighting the limitations of existing approaches to OOD detection in NLP. Specifically, we evaluated eight OOD detection methods that are easily integrable into existing NLP systems and require no additional OOD data or model modifications. One of our contributions is providing a well-structured research environment that allows for full reproducibility of the results. Additionally, our analysis shows that existing OOD detection methods for NLP tasks are not yet sufficiently sensitive to capture all samples characterized by various types of distributional shifts. Particularly challenging testing scenarios arise in cases of background shift and randomly shuffled word order within in domain texts. This highlights the need for future work to develop more effective OOD detection approaches for the NLP problems, and our work provides a well-defined foundation for further research in this area. | # Classical Out-Of-Distribution Detection Methods Benchmark In Text Classification Tasks Mateusz Baran1,2 Joanna Baran1 **Mateusz Wójcik**1,2 **Maciej Zi˛Eba**1,3 **Adam Gonczarek**2
1Wroclaw University of Science and Technology
{firstname.lastname}@pwr.edu.pl 2Alphamoon Ltd., Wrocław
{firstname.lastname}@alphamoon.ai 3Tooploox Ltd., Wrocław
## Abstract
State-of-the-art models can perform well in controlled environments, but they often struggle when presented with out-of-distribution
(OOD) examples, making OOD detection a critical component of NLP systems. In this paper, we focus on highlighting the limitations of existing approaches to OOD detection in NLP.
Specifically, we evaluated eight OOD detection methods that are easily integrable into existing NLP systems and require no additional OOD
data or model modifications. One of our contributions is providing a well-structured research environment that allows for full reproducibility of the results. Additionally, our analysis shows that existing OOD detection methods for NLP
tasks are not yet sufficiently sensitive to capture all samples characterized by various types of distributional shifts. Particularly challenging testing scenarios arise in cases of background shift and randomly shuffled word order within in-domain texts. This highlights the need for future work to develop more effective OOD detection approaches for NLP problems, and our work provides a well-defined foundation for further research in this area.
## 1 Introduction
Systems based on artificial intelligence (AI) have to be safe and trustworthy (Amodei et al., 2016). Ensuring user reliance on these systems requires a cautious approach in making predictions. AI
tools should avoid decisions on examples that significantly deviate from the training data. This is especially risky when the classifier shows excessive confidence in its incorrect decisions, leading to the propagation of errors in the system pipeline (Commission et al., 2019). However, current models are often trained under the closed-world assumption, limited to specific domains (Park et al., 2022).
Test sets drawn from the same domain for evaluation may not reflect real-world scenarios accurately (Teney et al., 2020). This poses challenges when deploying such models in production environments (Schrouff et al., 2022).
![0_image_0.png](0_image_0.png)
Real-world data is often completely different from training one. The change in data distribution can be caused by several factors such as user behavior, legal regulations, market trends or seasonal changes. In an *open-world* scenario, the AI-based system can be even exposed to inputs that deviate from the trained task. A significant risk that may arise is the possibility of model overconfidence while predicting data of this nature. As a result, there is a business need for detecting examples outside the domain (Hendrycks and Gimpel, 2017).
Out-of-distribution (OOD) detection techniques can be well applied in a production system with human-in-the-loop technology (Wu et al., 2022),
where it is important to quickly identify whether an input sample is characterized by a distributional shift. Such an example should be handled then by a human expert in order to avoid potential misclassification by the model. The essence of such systems is to find a trade-off between the accuracy and automation (Mosqueira-Rey et al., 2022) (Figure 1).
This way, the model can achieve the highest possible performance on in-distribution (ID) data and difficult shifted data can be given to human verification, thus increasing the credibility of the overall system. The bottleneck here is a well-designed OOD detection method, which must be sensitive enough to capture all examples outside the domain.
The problem of OOD identification is mainly investigated for vision classification tasks (Yang et al., 2022a; Kuan and Mueller, 2022), whereas in the field of NLP, studies on this topic are limited.
We fill the missing gap by proposing a comprehensive analysis of existing OOD approaches for text classification tasks. In this work, we focus on the **post-hoc** techniques which are most suitable for business applications i.e. they have to fulfil the requirement of smooth integration into existing systems, without the need for additional OOD training data or any changes in model architecture. Ultimately, we evaluated eight methods in two different scenarios. The first one includes grouping test data into three splits according to the similarity to the in-distribution set: Near-OOD, *Far-OOD* and Distinct-OOD (Yang et al., 2021). The AI system is evaluated based on the degree of domain difference between training and test samples. The second scenario considers the division of datasets according to the shift of distribution (Arora et al., 2021). There are many categories of distribution shift (Hupkes et al., 2022), but in this study, we consider two types - semantic and background. **Semantic shift**
occurs when new labels appear, which may be due to the lack of a sufficient number of classes representing the training data or the emergence of new classes over time. In distinction, the **background**
shift is class independent. It appears when the characteristic features of text change (e.g. source origin, writing style), which can happen even within the same class. The reason may be language evolution, regional conditions, etc. - such factors are difficult to predict and adequately address in the training set.
By preparing data separated into different kinds of shift, we gain an in-depth insight into the origin of the data, on which a particular OOD detection method performs better or worse.
We also provide a well-structured research environment that allows the full reproducibility of the achieved outcomes and evaluation of other NLP
models. The source code is available on GitHub1.
To summarize, our contribution is as follows:
- we adjust the existing OOD detection techniques to the text classification problems,
- we comprehensively evaluate the revised methods using two different scenarios tailored to the NLP domain,
- we deliver the complete experimental framework for evaluating the OOD methods.
## 2 Related Work
In recent years, there has been a growing interest in developing robust methods that can detect outof-distribution examples. The work of Hendrycks and Gimpel (2017) has played a significant role in advancing this field. Their Maximum Softmax Probability (MSP) method, which relies on the softmax output of a neural network, has become a reference for subsequent research and still remains as the solid baseline approach (Zhang et al., 2023).
The benefit of the MSP was its independence from the specific task domain. Since then, many researchers have extended this method or proposed novel techniques to address the challenge of detecting OOD data.
The first to popularize the interest in the OOD
topic were computer vision (CV) researchers (Bengio et al., 2011). The emerged techniques in this field were summarized in a survey by Yang et al.
(2021). The authors proposed a unified framework that groups OOD detection methods into categories based on their common underlying mechanisms.
Among them, the following ones can be distinguished: (1) **output-based** (Liu et al., 2020; Liang et al., 2018) techniques which detect OOD samples based on output vector obtained by classification model for given input; (2) **gradient-based** (Huang et al., 2021) focus on analyzing the fluctuation of the gradient flow through the model layers to verify that the input is OOD; (3) **density-based** (Zong et al., 2018) methods involve modeling a density function from the training set and then determining whether a new example belongs to the same distribution; (4) **distance-based** (Sun et al., 2022; Ren et al., 2021) measure the dissimilarity between a new input and the training data by computing standard metrics such as cosine similarity, Euclidean or Mahalanobis distance. Another work of Yang et al.
(2022a) provides a comprehensive evaluation of 13 methods for OOD detection in CV. Notably, the experimental results show that simple preprocessing techniques can be highly effective, outperforming even more sophisticated methods in identifying OOD examples. In addition, post-hoc methods have demonstrated considerable effectiveness in OOD detection and have made significant impact in this task. The NLP community is also more and more interested in addressing the challenge of OOD
detection data, especially after the appearance of text processing automation systems. Despite the expectation that pre-trained language models (PLMs)
would generalize well to unseen data, many existing transformer-based architectures perform poorly in an open-world assumption setup. This was proven by the work (Yang et al., 2022b) where the authors created the GLUE-X benchmark to reliably test the robustness of PLMs against OOD samples exposure, without using any of the previously mentioned techniques dedicated to OOD. Their achieved results confirm the necessity of further development of OOD detection methods. Currently, researchers are continuously proposing techniques tailored for the NLP tasks (Rawat et al., 2021; Zhou et al., 2021), revisiting existing ones (Podolskiy et al., 2021) or designing completely novel approaches that can address specific shifts in data distribution (Arora et al., 2021; Chen et al., 2023).
The latter two publications particularly highlight the importance of dividing datasets into semantic and background shift sets, as they provide valuable findings and a better understanding of how the model works on different data types.
Evidently, there have been several NLP articles addressing OOD detection, but their comparison to existing methods has been limited. A comprehensive study which evaluates various OOD detection approaches on a larger scale and addressing the specific needs of businesses is still lacking. To fill this gap, we have developed a benchmark that provides a fair comparison of these techniques while testing their performance across different distributional shift scenarios. All the selected methods have been inspired by CV achievements, and we have specifically chosen those that can be easily integrated into an existing AI system with minimal complexity.
## 3 Benchmark Outline
This section provides an overview of the datasets and the model architecture, with a detailed description of the techniques reimplemented in our benchmark for detecting out-of-domain examples. The metrics used for evaluating the effectiveness of the detection methods are also presented.
## 3.1 Datasets
News Category Dataset (Misra, 2022) is one of the biggest news datasets. It contains around 210k news headlines from HuffPost published between 2012 and 2022. The dataset comprises 42 classes that are heavily imbalanced. Therefore, the most similar classes were combined to avoid confusion between them. Ultimately, we obtained 17 representative classes.
Twitter Topic Classification (Antypas et al., 2022)
is a topic classification dataset collected from Twitter posts. It consists of 3184 high-quality tweets that have been assigned to one of six classes.
SST-2 (The Stanford Sentiment Treebank) (Socher et al., 2013) is a corpus with fully labeled parse trees that allows for an analysis of the compositional effects in language sentiment. The corpus includes almost 70k sentences extracted from movie reviews. Sentences were annotated with regard to their polarization (positive or negative).
IMDB (Maas et al., 2011) is a large collection of movie reviews from the Internet Movie Database created for the binary sentiment classification task.
According to the original 10-point movie rating scale from the website, the dataset samples were filtered to include only highly polarized texts annotated as positive (≥ 7) or negative (≤ 4).
Yelp Polarity Review (Zhang et al., 2015) dataset includes almost 600k customer reviews which are labeled as positive or negative based on the number of stars given by the reviewer. Specifically, texts with ≤ 2 stars are labeled as negative, while those with ≥ 3 are labeled as positive. Due to the large size of the dataset, we created a smaller version by randomly selecting a subset of 75k reviews.
Language Detection Dataset (Saji, 2021) is a small dataset for the language detection task. It contains texts in 17 different languages. For benchmark purposes, we filter out languages that do not use the Latin alphabet. We have also excluded English texts to create a clear out-of-distribution dataset. Finally, the dataset consists of around 6k samples, all of which are used for OOD evaluation.
20 Newsgroups (McGraw Hill, 1995) consists of around 18k newsgroups posts on 20 topics. It is divided in two sets for training and evaluation. Moreover, we allocated an additional subset from the training set for validation purposes.
## 3.2 Model
In all experiments, we used the transformer-based (Vaswani et al., 2017) RoBERTa-base (Liu et al., 2019) model as a backbone with a fully connected layer as a classification head. The model was pretrained on English corpora, but it supports multiple languages.
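As a rough illustration of this setup (not the authors' training code), the backbone and classification head can be instantiated as follows; the number of labels shown corresponds to the 7-class NC/I split and is otherwise an arbitrary placeholder.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=7  # e.g., the 7 in-distribution News Category classes
)

inputs = tokenizer("A sample news headline.", return_tensors="pt", truncation=True)
outputs = model(**inputs, output_hidden_states=True)
logits = outputs.logits                      # used by output-based OOD scores
embedding = outputs.hidden_states[-1][:, 0]  # first-token feature used by distance-based scores
```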
## 3.3 Methods
We decided to compare **post-hoc** methods that are suitable to apply to trained models. They mainly use information based on model statistics such as intermediate layer values, gradients or nondeterministic properties of dropout regularization, etc. Their implementation is technically straightforward and independent of the type of model used.
![3_image_0.png](3_image_0.png)
An overview of our benchmark methodology is outlined in Figure 2. In addition to label prediction, we obtain a real-valued *confidence* score that indicates the level of confidence that the model has in whether the given sample belongs to the ID
data. We reimplemented eight OOD detection techniques and adapted them to the NLP classification pipeline.
(1) **Maximum Softmax Probability**
(MSP) (Hendrycks and Gimpel, 2017) employs the softmax score to check the certainty of whether an example belongs to a domain - we refer to it as the baseline method in our work.
(2) **Energy-based** (Liu et al., 2020) uses an energy score function to indicate model confidence.
(3) **Rectified Activations (ReAct)** (Sun et al.,
2021) is a simple technique for reducing model overconfidence on OOD examples by truncating the high activations during evaluation.
(4) **KL-Matching (KLM)** (Hendrycks et al.,
2022) calculates the minimum KL-divergence between the softmax probabilities and the mean classconditional distributions.
(5) **GradNorm** (Huang et al., 2021) utilizes information obtained from the gradient space of model's classification layer. This approach uses the vector norm of gradients to distinguish between ID and OOD samples, with the assumption that higher norm values correspond to in-distribution data.
(6) **Directed Sparisification (DICE)** (Sun and Li, 2022) selectively chooses a subset of weights through sparsification, which helps to eliminate irrelevant information from the output.
(7) **Virtual-logit Matching (ViM)** (Wang et al.,
2022a) combines information from feature space
(PLM embedding) and output logits, providing both class-agnostic and class-dependent knowledge simultaneously for better separation of OOD data.
(8) **K-nearest neighbors (KNN)** (Sun et al., 2022) computes the distance between the embedding of an input example and the embeddings of the training set, and uses it to determine whether the example belongs to the ID or not.
The first four methods use signals originating from the output layer of the model. GradNorm focuses solely on the gradients that flow through the classification head, while methods from 6 to 8 operate on the embedding of a PLM. Most techniques (specifically no. 3-4, 6-8) need an initial configuration on the training or validation set to estimate the required statistics for ID data. To ensure consistency in the benchmarking process, the hyperparameters for the above methods were set to the values recommended in their original papers.
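To make the post-hoc nature of these techniques concrete, the sketch below computes three of the above confidence scores (MSP, energy, and KNN) from a model's logits and embeddings. It is a simplified illustration under our own naming, not the benchmark implementation, and omits method-specific calibration steps; `temperature` and `k` are placeholder hyperparameters.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.neighbors import NearestNeighbors

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    # (1) Maximum Softmax Probability: higher means more ID-like.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # (2) Negative free energy: higher means more ID-like.
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

class KNNDetector:
    # (8) Negated distance to the k-th nearest training embedding (higher = more ID-like).
    def __init__(self, train_embeddings: np.ndarray, k: int = 50):
        self.index = NearestNeighbors(n_neighbors=k).fit(self._norm(train_embeddings))

    @staticmethod
    def _norm(x: np.ndarray) -> np.ndarray:
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    def score(self, embeddings: np.ndarray) -> np.ndarray:
        distances, _ = self.index.kneighbors(self._norm(embeddings))
        return -distances[:, -1]
```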
## 3.4 Metrics
To compare the chosen methods, we used three the most common metrics for OOD detection.
AUROC calculates the area under the Receiver Operating Characteristic (ROC) curve. The ROC
curve plots the true positive rate against the false positive rate, and a larger area under the curve indicates better performance. This was used as our primary evaluation metric.
AUPR-IN measures the area under the PrecisionRecall (PR) curve. The PR curve displays how well the method can identify true positives with high precision, and AUPR provides a measure of overall performance. The *"IN"* suffix indicates that this metric pertains to in-distribution data.
FPR@95 is the false positive rate when the true positive rate is set to 95%. Lower scores indicate better performance.
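These three metrics can be computed from per-sample confidence scores as in the sketch below (our own helper, with in-distribution treated as the positive class; not the benchmark's evaluation code).

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def ood_metrics(id_scores: np.ndarray, ood_scores: np.ndarray) -> dict:
    # In-distribution samples are the positive class (label 1).
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    auroc = roc_auc_score(labels, scores)
    aupr_in = average_precision_score(labels, scores)
    # FPR@95: fraction of OOD samples accepted when 95% of ID samples are accepted.
    threshold = np.percentile(id_scores, 5)
    fpr_at_95 = float((ood_scores >= threshold).mean())
    return {"AUROC": auroc, "AUPR-IN": aupr_in, "FPR@95": fpr_at_95}
```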
Table 1: Datasets setup for experiments.
| Dataset | #Classes | Train / Val / Test | Avg. words |
|-----------|------------|-----------------------|--------------|
| NC/I | 7 | 66223 / 26475 / 39688 | 9.95 |
| NC/O | 10 | - / - / 48522 | 9.77 |
| Twitter | 6 | - / - / 3184 | 29.80 |
| IMDB | 2 | 25000 / 5000 / 20000 | 231.15 |
| SST-2 | 2 | 43221 / 5000 / 20000 | 9.53 |
| Yelp | 2 | 50000 / 5000 / 20000 | 133.11 |
| Language | 9 | - / - / 5864 | 19.08 |
| NCR/I | 7 | - / - / 39688 | 9.95 |
| NCR/O | 10 | - / - / 48522 | 9.77 |
| Computer | 5 | 2965 / 456 / 1460 | 218.63 |
| Politics | 4 | 1959 / 315 / 979 | 406.53 |
| Sports | 4 | 2363 / 432 / 1182 | 224.43 |
## 4 Data Preparation
In our study, we have paid particular attention to provide a complete and unbiased comparison of OOD detection methods. To achieve this goal, we adopted two diverse perspectives: one inspired by the field of computer vision (Yang et al., 2022a) and the other drawn from works dedicated to the NLP
domain (Rawat et al., 2021; Arora et al., 2021).
## 4.1 Scenario 1
The first perspective intends to provide a detailed analysis of considered techniques based on the similarity between OOD examples and the training set. The degree of similarity is defined here in a human-intuitive way, taking into account such factors as thematic proximity, task dissimilarity or the sentence correctness.
As a base in-distribution data, we chose *News* Category dataset using the seven most popular classes (**NC/I**). The remaining classes were considered as out-of-distribution split (**NC/O**) which represents data in close semantic shift. The *Twitter Topic Classification* dataset has categories that are similar to those in the *News Category* dataset, but the sentence construction is significantly different. Both sets create the **Near-OOD** data setup.
Another prepared collection, **Far-OOD**, includes datasets with reviews of movies, hotels and restaurants that are vastly different from the *NC/I* data - it is a combination of *SST-2*, *Yelp* and *IMDB*. Additionally, we prepared one more group named **Distinct-OOD**
containing *Language Detection* dataset. With the inclusion of non-English texts there, we obtain a distinct set of tokens that the RoBERTa model has not encountered before, creating a completely separate dataset from the in-distribution data.
Finally, we also designed two collections derived from the *News Category* dataset by randomly shuffling words from all those available within each category. The new dataset, called News Category Random, retained the original number of examples and the number of words in each sample. These sets aimed to examine the classification system behavior when presented with input sentences that are completely disrupted from their original context.
The previous partition into ID (**NCR/I**) and OOD
(**NCR/O**) subsets was maintained.
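One plausible way to construct such category-wise shuffled data is sketched below: pool the words of a category, shuffle the pool, and redistribute the words while keeping each sample's original word count. This is our reading of the procedure, not the authors' script, and all names are placeholders.

```python
import random

def shuffle_within_category(texts: list[str], seed: int = 0) -> list[str]:
    # Pool all words appearing in one category, shuffle them, and rebuild the
    # samples so that each keeps its original number of words.
    rng = random.Random(seed)
    lengths = [len(t.split()) for t in texts]
    pool = [word for t in texts for word in t.split()]
    rng.shuffle(pool)
    shuffled, start = [], 0
    for n in lengths:
        shuffled.append(" ".join(pool[start:start + n]))
        start += n
    return shuffled
```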
## 4.2 Scenario 2
This scenario investigated the performance of detection methods for OOD examples under semantic and background shift. For semantic shift, we utilized the *20 Newsgroups* dataset that is a hierarchical collection of documents. Among the four top-level categories, we selected three - **Computer**, Sports, and **Politics** - as training sets for the model, while excluding the *"misc"* category due to potential data leakage issues. Subsequently, we generated various combinations of these categories, treating each one in turn as an in-distribution set, while considering the others as a OOD data. For example, the model could be trained on the samples from Computer class (ID dataset) and evaluated later on Sports and Politics (OOD).
In order to test the impact of background shift, we took three sentiment classification datasets –
IMDB, *SST-2* and *Yelp*, which are based on user reviews and represent different domains. Although these datasets have similar linguistic properties, the topics they address are distinct. Again, we constructed various combinations of these collections by treating each one as the ID set and the others as OOD sets.
## 5 Experiments
In this section, we describe the details of a training procedure and present the outcomes from the experiments.
## 5.1 Training Setup
The PLM fine-tuning duration took maximally 100 epochs with an early stopping mechanism (Raskutti et al., 2011) applied (patience = 10 epochs). By using this technique, we were able to conserve computational resources while still obtaining high-performing models. The learning rate hyperparameter was always set to 2e−5. To prevent overfitting and enhance the model's generalization capabilities, we used a weight decay wd = 0.01 with the Adam optimizer (Zhang, 2018). The best-performing model was selected based on the F1-score achieved on the validation set, and the final results were reported on the test set (see Appendix A). To minimize the influence of randomness on the outcomes, we trained the PLM five times for each task using different initial seeds.

| Method | NC/O | Twitter | IMDB | SST-2 | Yelp | Language | NCR/I | NCR/O |
|---|---|---|---|---|---|---|---|---|
| MSP | 74.2±0.3 | 74.8±2.4 | 96.6±3.1 | 84.2±3.3 | 95.3±1.5 | 95.1±1.9 | 59.0±0.8 | 80.5±0.6 |
| Energy | 77.6±0.4 | 84.8±1.9 | 99.6±0.5 | 92.6±2.6 | 98.6±0.7 | 98.7±0.6 | 60.1±1.0 | 84.9±0.7 |
| GradNorm | 77.2±0.5 | 81.8±2.7 | 99.0±1.1 | 90.8±2.2 | 97.8±0.8 | 97.8±0.7 | 60.5±1.4 | 85.0±0.8 |
| KLM | 62.9±0.4 | 54.0±3.8 | 92.5±6.2 | 67.7±4.6 | 88.9±3.7 | 86.7±3.9 | 50.6±0.1 | 68.5±0.6 |
| ReAct | 77.5±0.4 | 84.5±2.0 | 99.6±0.5 | 92.4±2.8 | 98.6±0.7 | 98.7±0.6 | 60.0±1.0 | 84.7±0.7 |
| DICE | 58.2±0.6 | 60.9±3.2 | 76.6±5.8 | 60.9±1.4 | 84.4±2.2 | 69.3±2.8 | 51.2±0.9 | 60.4±1.4 |
| KNN | 80.1±0.2 | 92.9±1.2 | 99.8±0.1 | 96.4±1.1 | 99.5±0.1 | 99.6±0.1 | 67.6±1.3 | **88.7±0.5** |
| ViM | 79.9±0.2 | 89.2±1.5 | 90.6±3.1 | 96.0±0.9 | 92.9±1.6 | 98.1±0.8 | 60.7±0.8 | 86.1±0.4 |

Table 2: AUROC (%) and standard deviations for methods evaluated on datasets from the first scenario (Near-OOD: NC/O, Twitter; Far-OOD: IMDB, SST-2, Yelp; Distinct-OOD: Language, NCR/I, NCR/O).
During each experiment, the PLM was finetuned on ID data, which consisted of training and validation splits. The evaluation of the OOD detection methods themselves was performed on predefined test data. A complete overview of the split sizes along with the number of classes in all data collections is presented in Table 1.
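A rough sketch of this fine-tuning setup using the HuggingFace `Trainer` is given below. The dataset objects are placeholders, the metric function is our own (assuming macro F1 for model selection), and the arguments merely mirror the hyperparameters stated above rather than reproducing the authors' exact script.

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

def compute_f1(eval_pred):
    # Hypothetical metric function used for best-checkpoint selection.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="macro")}

args = TrainingArguments(
    output_dir="checkpoints",
    num_train_epochs=100,
    learning_rate=2e-5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=7)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # tokenized ID training split (placeholder name)
    eval_dataset=val_dataset,     # tokenized ID validation split (placeholder name)
    compute_metrics=compute_f1,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
)
trainer.train()
```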
## 5.2 Results
The outcomes from experiments on data prepared in the first scenario (Section 4.1) are shown in Table 2. The KNN clearly outperformed the other OOD detection techniques on all three data groups.
Energy-based method also stands out with its good results as well as ViM, except with its results on IMDB and Yelp dataset (worse than baseline MSP).
As expected, the values of evaluation metrics on the NC/O dataset were the lowest among *Near-OOD*
and *Far-OOD* divisions. This dataset was separated from the original dataset used in the training, making it the most difficult to properly identify as OOD due to the distributional closeness. The most challenging among the *Far-OOD* collections appeared to be *SST-2*, probably because of a small average number of words per example. The *Language* turned out to be the easiest dataset to detect OOD samples, and almost all methods performed well on it. The two worst performing approaches on the presented NLP tasks can be distinguished, i.e. *DICE* and KLM. Their measures were always worse than MSP, sometimes even nearly random
(a little above 50%) - *DICE* on NC/O and KLM on Twitter.
Interesting results can be seen in the last part of Table 2. Randomization of words in the case of the NC/O dataset (which created NCR/O) significantly increased the model confidence in detecting OOD examples compared with the initial NC/O samples. However, the OOD methods could not cope well with the shuffled in-domain *News Category* data (NCR/I), which a human would recognize as OOD.
![5_image_0.png](5_image_0.png)
Table 3 presents AUROC scores obtained from the second scenario (Section 4.2) evaluation. The results demonstrate that the ViM method is more effective in detecting OOD samples with semantic shift to ID data. However, for background shift data, ViM is not always the best and is outperformed by KNN on IMDB and Yelp datasets. The SST-2 dataset proved to be problematic again, but only when used as a training set. It is worth noting that the average length of texts per SST-2 is considerably different from IMDB and Yelp collections, which mainly contain longer texts. These observations suggest that KNN is more stable in terms of different data characteristics. To further emphasize the importance of comparing methods based on the type of shift, we created a visualization in Figure 3. The ReAct, *Energy*, and *GradNorm* techniques turned out to be better than the baseline, but only for the semantic shift case.

| ID | OOD | MSP | Energy | GradNorm | KLM | ReAct | DICE | KNN | ViM |
|---|---|---|---|---|---|---|---|---|---|
| Computer | Politics | 91.5±1.9 | 96.3±1.1 | 95.5±0.9 | 78.0±7.3 | 96.2±1.2 | 34.6±13.2 | 97.0±0.5 | **98.6±0.3** |
| | Sports | 89.8±2.7 | 94.9±1.6 | 94.1±1.6 | 74.5±4.6 | 94.6±1.7 | 51.9±6.9 | 95.7±0.9 | **97.7±0.6** |
| Politics | Computer | 94.4±0.8 | 96.0±0.6 | 95.5±0.7 | 82.8±4.6 | 95.9±0.6 | 63.9±3.2 | 96.9±0.2 | **98.3±0.2** |
| | Sports | 91.4±1.1 | 93.4±0.9 | 92.9±1.0 | 72.3±5.6 | 93.3±0.9 | 58.6±2.4 | 95.3±0.4 | **97.3±0.3** |
| Sports | Computer | 95.7±0.6 | 97.0±0.9 | 96.8±0.5 | 81.6±3.9 | 96.9±0.9 | 58.1±7.6 | 97.6±0.4 | **98.5±0.2** |
| | Politics | 95.3±0.2 | 96.5±0.6 | 96.4±0.5 | 79.9±2.5 | 96.5±0.7 | 52.4±11.5 | 97.2±0.3 | **98.0±0.1** |
| IMDB | SST-2 | 85.3±0.8 | 84.3±1.8 | 77.8±3.0 | 61.2±1.7 | 84.5±1.9 | 84.6±3.3 | **97.8±1.2** | 97.3±0.7 |
| | Yelp | 76.0±3.3 | 74.9±4.1 | 66.2±3.6 | 32.0±1.0 | 75.3±4.3 | 49.6±8.6 | 97.5±1.1 | **98.4±0.8** |
| SST-2 | IMDB | 83.2±1.4 | 82.7±2.2 | 70.3±2.3 | 55.0±2.7 | 83.3±2.4 | 34.5±10.7 | **87.2±1.7** | 83.9±3.3 |
| | Yelp | 75.7±2.2 | 75.0±3.1 | 61.3±2.7 | 51.3±3.0 | 75.7±3.4 | 35.4±8.4 | **87.8±0.4** | 80.1±2.8 |
| Yelp | IMDB | 79.5±0.5 | 79.2±1.6 | 71.7±1.9 | 38.6±1.3 | 79.5±1.6 | 26.8±5.1 | 84.7±0.8 | **88.6±0.7** |
| | SST-2 | 91.6±0.5 | 91.5±0.9 | 86.1±1.0 | 59.9±2.5 | 91.7±0.9 | 55.8±8.5 | 98.5±0.3 | **99.0±0.1** |

Table 3: AUROC (%) and standard deviations for methods evaluated on datasets from the second scenario. The first part of the table refers to semantic shift, where the second part refers to background shift.
To summarize, either KNN or ViM is the preferred choice among all the analyzed OOD detection approaches. Other reported metric values
(AUPR-IN and FPR@95) from all experiments are attached in Appendix B.
## 5.3 Computational Resources
All experiments were conducted on a workstation equipped with a mid-range *Nvidia RTX 3060* GPU with 12GB of memory, a high-end *Intel(R)*
Core(TM) i9-10900X CPU with 20 cores and 40 threads, and 256 GB RAM. These resources provided sufficient capacity for running the experiments and training the models used in this work, including analysis and processing of large datasets.
In total, we trained 35 models, taking 222 GPU-hours, while evaluation alone took 124 GPU-hours.
## 6 Conclusions
The latest advancements in OOD detection techniques have surpassed the conventional MSP baseline. In this work, we applied some of them to the NLP classification problems, selecting only posthoc approaches because of their easy integration to already trained PLM model. Most of the examined techniques achieved better results than the MSP, but their performance varied when subjected to different types of data distributional shift. Background shift was particularly challenging for the majority of methods to properly distinguish OOD
examples. The KNN and ViM methods were found to be the most effective, and their performance was also stable. Hence, they are better alternatives to MSP for out-of-distribution detection. However, it should be kept in mind that it is likely that the ViM
method is sensitive to cases where the language model was trained on short texts and later exposed to a long text from outside the domain.
Our unique analysis of the Distinct-OOD scenario allowed us to draw interesting findings. The tested methods were able to identify texts in different languages very easily as OOD examples, but they had problems detecting OOD on the *News Category Random* data with shuffled words. This means that PLM models, despite their ability to detect contextual nuances in text, still tend to behave like Bag-of-Words (Zhang et al., 2010)
in text classification tasks. Business-wise, such structurally disturbed examples should not be further processed by AI systems. Therefore, OOD
methods employed in NLP should better address semantic disorders in input sentences.
In conclusion, the overall performance of current OOD detection techniques is still low and unsatisfactory, particularly when presented with Near-OOD samples. Further research is necessary for the development of OOD detection methods, especially in the field of NLP, where more and more document processing automation systems are being developed and ensuring reliability is important for users. Our work addresses the need for a comprehensive framework to evaluate the quality of OOD detection and provides easy extensibility to emerging methods.
## 7 Limitations
While our study provides valuable insights, it is important to keep in mind its limitations. Firstly, it was confined to text classification and did not include other NLP problems such as Named Entity Recognition (NER) (Wang et al., 2022b), Question Answering (QA) (Pandya and Bhatt, 2021),
etc. Expanding this research to a wider range of tasks would provide a better understanding of the methods' performance in diverse data scenarios.
Additionally, the inclusion of a task shift can be valuable, where the model is trained on a single task but OOD data come from a totally different prediction problems.
Secondly, we conducted our experiments using only RoBERTa model. We chose a widely used language model for text classification, but there are several other architectures worth testing, especially large language models (LLMs) (Zhao et al., 2023)
that now becoming extremely popular. A more comprehensive evaluation of the models and methods could provide more insights into whether the development of transformer-based methods contributes to better detection of OOD data.
Finally, due to restricted computational time, we did not perform a hyperparameter search for either model or methods. We just used recommend values from the original publications. This may have affected the obtained results, and it is certainly an aspect worth investigating in the future.
## 8 Ethics Statement
The authors believe that their work does not raise any ethical questions of harm or discrimination. Moreover, they acknowledge that the benchmark has a wide range of potential applications and want to make it clear that they are not responsible for any unethical applications of their work.
## Acknowledgements
The research was conducted under the Implementation Doctorate programme of Polish Ministry of Science and Higher Education
(DWD/6/0322/2022) with cooperation of the Artificial Intelligence Department at Wroclaw University of Science and Technology. It was partially co-funded by the European Regional Development Fund within the Priority Axis 1 "Enterprises and innovation", Measure 1.2. "Innovative enterprises, sub-measure 1.2.1. "Innovative enterprises - horizontal competition" as part of ROP WD 2014-2020, support contract no. RPDS.01.02.01-02-0063/2000. The work conducted by Maciej Zieba was supported by the National Centre of Science (Poland)
Grant No. 2021/43/B/ST6/02853.
## References
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F.
Christiano, John Schulman, and Dan Mané.
2016. Concrete problems in AI safety. *CoRR*,
abs/1606.06565.
Dimosthenis Antypas, Asahi Ushio, Jose CamachoCollados, Leonardo Neves, Vitor Silva, and Francesco Barbieri. 2022. Twitter Topic Classification. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Udit Arora, William Huang, and He He. 2021. Types of out-of-distribution texts and how to detect them.
Yoshua Bengio, Frédéric Bastien, Arnaud Bergeron, Nicolas Boulanger–Lewandowski, Thomas Breuel, Youssouf Chherawala, Moustapha Cisse, Myriam Côté, Dumitru Erhan, Jeremy Eustache, Xavier Glorot, Xavier Muller, Sylvain Pannetier Lebeuf, Razvan Pascanu, Salah Rifai, François Savard, and Guillaume Sicard. 2011. Deep learners benefit more from out-of-distribution examples. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of *Proceedings* of Machine Learning Research, pages 164–172, Fort Lauderdale, FL, USA. PMLR.
Sishuo Chen, Wenkai Yang, Xiaohan Bi, and Xu Sun.
2023. Fine-tuning deteriorates general textual outof-distribution detection by distorting task-agnostic features. *EACL*.
European Commission, Content Directorate-General for Communications Networks, and Technology. 2019.
Ethics guidelines for trustworthy AI. Publications Office.
Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2022. Scaling out-ofdistribution detection for real-world settings.
Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. Proceedings of International Conference on Learning Representations.
Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. In *NeurIPS*, volume abs/2110.00218.
A. V. Podolskiy, Dmitry Lipin, A. Bout, E. Artemova, and Irina Piontkovskaya. 2021. Revisiting mahalanobis distance for transformer-based out-of-domain detection. In *AAAI Conference on Artificial Intelligence*.
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, and Zhijing Jin. 2022. State-of-the-art generalisation research in nlp: a taxonomy and review.
CoRR, abs/2210.03050.
Garvesh Raskutti, Martin J. Wainwright, and Bin Yu.
2011. Early stopping for non-parametric regression:
An optimal data-dependent stopping rule. In 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1318–
1325.
Mrinal Rawat, Ramya Hebbalaguppe, and Lovekesh Vig.
2021. Pnpood : Out-of-distribution detection for text classification via plug andplay data augmentation.
Johnson Kuan and Jonas Mueller. 2022. Back to the basics: Revisiting out-of-distribution detection baselines.
Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. 2021.
A simple fix to mahalanobis distance for improving near-ood detection. *CoRR*, abs/2106.09022.
Shiyu Liang, Yixuan Li, and R. Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In *International Conference* on Learning Representations.
Basil Saji. 2021. A dataset for language detection. https://www.kaggle.com/datasets/basilb2s/language-detection. Accessed: 2023-04-15.
Jessica Schrouff, Natalie Harris, Sanmi Koyejo, Ibrahim M Alabdulmohsin, Eva Schnider, Krista Opsahl-Ong, Alexander Brown, Subhrajit Roy, Diana Mincu, Christina Chen, Awa Dieng, Yuan Liu, Vivek Natarajan, Alan Karthikesalingam, Katherine A Heller, Silvia Chiappa, and Alexander D'Amour.
2022. Diagnosing failures of fairness transfer across distribution shift in real-world medical settings. In Advances in Neural Information Processing Systems, volume 35, pages 19304–19318. Curran Associates, Inc.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Eduardo Mosqueira-Rey, Elena Hernández-Pereira, David Alonso-Ríos, José Bobes-Bascarán, and Ángel Fernández-Leal. 2022. Human-in-the-loop machine learning: A state of the art. *Artif. Intell. Rev.*,
56(4):3005–3054.
Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li.
2022. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pages 20827–20840. PMLR.
Hariom A Pandya and Brijesh S Bhatt. 2021. Question answering survey: Directions, challenges, datasets, evaluation matrices. *arXiv preprint* arXiv:2112.03572.
Damien Teney, Kushal Kafle, Robik Shrestha, Ehsan Abbasnejad, Christopher Kanan, and Anton van den Hengel. 2020. On the value of out-of-distribution testing: An example of goodhart's law. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Hyunji Park, Yogarshi Vyas, and Kashif Shah. 2022.
Efficient classification of long documents using transformers. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 2: Short Papers), pages 702–709, Dublin, Ireland. Association for Computational Linguistics.
Weitang Liu, Xiaoyun Wang, John D. Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. In *NeurIPS*.
McGraw Hill. 1995. 20 newsgroups dataset.
Rishabh Misra. 2022. News category dataset. arXiv preprint arXiv:2209.11429.
Yiyou Sun, Chuan Guo, and Yixuan Li. 2021. React: Out-of-distribution detection with rectified activations. In *NeurIPS*, pages 144–157.
Yiyou Sun and Yixuan Li. 2022. Dice: Leveraging sparsification for out-of-distribution detection.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021.
Contrastive out-of-distribution detection for pretrained transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang. 2022a. Vim: Out-of-distribution with virtuallogit matching. In *NeurIPS*, volume abs/2203.10807.
Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Dae ki Cho, and Haifeng Chen.
2018. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In *ICLR*
(Poster).
Yu Wang, Hanghang Tong, Ziye Zhu, and Yun Li. 2022b.
Nested named entity recognition: A survey. ACM
Trans. Knowl. Discov. Data, 16(6).
Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. 2022. A survey of human-in-the-loop for machine learning. Future Generation Computer Systems, 135:364–381.
Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, and Ziwei Liu. 2022a. Openood: Benchmarking generalized out-of-distribution detection.
NeurIPS.
Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. 2021. Generalized out-of-distribution detection:
A survey. *arXiv preprint arXiv:2110.11334*.
Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, and Yue Zhang. 2022b. Glue-x: Evaluating natural language understanding models from an out-ofdistribution generalization perspective.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level Convolutional Networks for Text Classification. *arXiv:1509.01626 [cs]*.
Yin Zhang, Rong Jin, and Zhi-Hua Zhou. 2010. Understanding bag-of-words model: A statistical framework. International Journal of Machine Learning and Cybernetics.
Yuhang Zhang, Weihong Deng, and Liang Zheng. 2023.
Unsupervised evaluation of out-of-distribution detection: A data-centric perspective.
Zijun Zhang. 2018. Improved adam optimizer for deep neural networks. In *2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)*,
pages 1–2.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen.
2023. A survey of large language models. *CoRR*,
abs/2303.18223.
## A Training Details

Each model was trained on five different seeds from range [2021, 2025]. Table 4 includes averaged classification metrics with standard deviation.

| Dataset | Accuracy | F1 Score | Precision | Recall |
|---|---|---|---|---|
| NC/I | 82.4±0.1 | 81.8±0.1 | 81.7±0.2 | 82.0±0.2 |
| Computer | 89.2±0.3 | 89.3±0.4 | 89.3±0.4 | 89.3±0.3 |
| Politics | 94.7±0.3 | 94.6±0.3 | 94.6±0.4 | 94.7±0.3 |
| Sports | 97.5±0.2 | 97.5±0.2 | 97.5±0.2 | 97.5±0.2 |
| IMDB | 94.7±0.1 | 94.7±0.1 | 94.7±0.1 | 94.7±0.1 |
| SST-2 | 93.9±0.1 | 93.8±0.1 | 93.7±0.1 | 93.8±0.1 |
| Yelp | 96.9±0.0 | 96.9±0.0 | 96.9±0.0 | 96.9±0.0 |

Table 4: Training metrics on test set.

## B Evaluation Details

The values for all metrics that were considered in our experiments are listed below. Tables 5 and 6 refer to Scenario 1 of OOD data preparation; Tables 7 and 8 report the results from Scenario 2.
Table 5: AUPR-IN (%) and standard deviations for methods evaluated on datasets from first scenario.
Method NC/O Twitter IMDB SST-2 Yelp Language NCR/I NCR/O
MSP 71.7±0.4 97.3±0.3 98.4±1.5 91.9±1.8 97.4±0.8 99.2±0.3 59.2±1.1 80.4±0.7 Energy 74.5±0.6 98.5±0.2 99.8±0.2 96.3±1.3 99.2±0.4 99.8±0.1 58.8±1.3 84.0±0.9
GradNorm 73.9±0.7 98.2±0.3 99.5±0.6 95.4±1.1 98.8±0.4 99.7±0.1 58.5±1.9 83.8±1.0 KL-Matching 51.0±0.4 90.9±0.7 94.1±5.1 72.0±2.8 87.7±3.6 96.6±1.3 48.3±0.2 54.8±0.5
ReAct 74.3±0.5 98.4±0.2 99.8±0.2 96.1±1.4 99.2±0.4 99.8±0.1 58.9±1.4 83.7±0.9 DICE 51.8±0.7 96.0±0.4 91.0±2.6 82.6±0.9 94.1±0.9 95.0±0.5 51.0±1.0 56.7±1.5
KNN 78.5±0.1 99.3±0.1 99.9±0.0 98.3±0.5 99.8±0.1 99.9±0.0 68.9±1.3 88.6**±0.6**
VIM 77.1±0.1 99.0±0.2 96.5±1.1 98.1±0.5 97.1±0.6 99.7±0.1 58.9±0.9 85.5±0.4
Table 6: FPR@95 (%) and standard deviations for methods evaluated on datasets from first scenario. Lower scores indicate better performance.
Method NC/O Twitter IMDB SST-2 Yelp Language NCR/I NCR/O
MSP 82.3±0.8 77.3±4.8 19.6±18.8 61.3±7.8 21.5±6.7 29.3±10.7 91.3±0.5 75.2±1.4 Energy 75.2±1.0 55.7±7.3 2.4±2.7 35.8±10.5 7.1±3.5 7.6±3.5 89.0±0.6 63.8±1.9 GradNorm 75.9±0.8 65.1±6.9 5.7±6.3 44.0±7.9 11.2±4.0 12.9±5.0 88.8±0.7 63.7±2.1
KL-Matching 85.8±0.5 85.4±3.5 33.8±29.7 76.4±4.6 30.2±8.7 55.7±9.2 92.3±0.3 80.2±0.8 ReAct 75.3±1.1 55.3±7.2 2.2±2.5 35.6±10.6 7.0±3.4 7.6±3.6 89.2±0.6 64.2±1.9 DICE 95.2±0.3 99.9±0.0 100.0±0.0 99.9±0.1 99.4±0.9 100.0±0.0 96.3±0.3 97.1±0.5
KNN 73.9±0.6 34.4±5.4 0.2±0.1 22.1±8.6 2.2±0.7 1.4±0.5 85.7±0.7 56.1**±1.6**
VIM 71.5**±0.5** 57.8±4.7 86.5±12.4 23.7±5.2 63.3±10.7 13.2±8.0 88.9±0.5 63.4±1.1
Table 7: AUPR-IN (%) and standard deviations for methods evaluated on datasets from second scenario. The first part of the table refers to semantic shift, where the second part refers to background shift.
ID OOD MSP Energy GradNorm KLM ReAct DICE KNN VIM
Computer Politics 95.2±1.1 97.7±0.7 97.4±0.5 77.7±8.2 97.6±0.7 56.1±11.4 98.2±0.3 99.1**±0.2**
Sports 93.3±1.9 96.4±1.1 96.0±1.0 71.3±5.5 96.2±1.2 64.3±9.0 97.1±0.6 98.3**±0.4**
Politics Computer 93.8±0.7 94.8±0.7 94.6±0.7 67.3±9.0 94.7±0.7 68.9±2.2 96.7±0.2 97.9**±0.2**
Sports 91.6±1.2 92.8±1.0 92.4±1.2 60.8±9.6 92.6±1.1 67.5±1.9 95.8±0.3 97.1**±0.3**
Sports Computer 96.3±0.7 96.9±1.1 97.1±0.5 70.1±7.4 96.8±1.1 67.2±6.4 98.0±0.3 98.7**±0.2**
Politics 96.6±0.4 97.1±0.9 97.4±0.5 75.3±1.7 97.1±0.9 66.0±9.7 98.2±0.2 98.6**±0.1**
IMDB SST-2 86.2±1.4 84.8±1.8 73.7±6.6 52.2±1.2 85.0±1.8 85.5±3.6 98.1**±1.0** 97.6±0.6
Yelp 82.1±2.8 80.8±3.6 71.5±3.3 38.8±0.6 81.2±3.9 51.4±8.0 97.9±0.8 98.6**±0.6**
SST-2 IMDB 85.7±1.5 85.1±2.0 69.4±3.0 48.6±1.4 85.7±2.2 41.1±5.0 91.4**±0.8** 86.5±2.5
Yelp 76.3±2.8 75.4±3.5 60.5±3.5 47.4±1.5 76.3±3.8 40.6±3.6 91.4**±0.4** 82.5±2.6
Yelp IMDB 83.5±0.5 82.7±2.5 76.1±2.3 41.1±0.5 83.0±2.4 36.8±1.6 88.2±0.6 91.2**±0.5**
SST-2 93.8±0.4 93.7±0.7 88.8±0.8 50.2±1.6 93.9±0.7 63.3±8.2 98.9±0.2 99.3**±0.1**
Table 8: FPR@95 (%) and standard deviations for methods evaluated on datasets from second scenario. The first part of the table refers to semantic shift, where the second part refers to background shift. Lower scores indicate better performance.
ID OOD MSP Energy GradNorm KLM ReAct DICE KNN VIM
Computer Politics 55.9±11.7 20.9±7.4 31.0±8.4 61.6±12.6 21.3±7.5 99.9±0.1 17.3±4.6 7.2**±1.8**
Sports 61.4±9.3 30.8±8.8 39.7±8.7 66.7±9.6 31.4±8.6 99.1±0.9 29.6±6.3 14.1**±5.6**
Politics Computer 38.4±8.9 22.0±4.0 28.1±8.9 42.1±9.8 22.7±4.1 98.8±0.9 22.8±4.6 9.4**±1.6**
Sports 55.8±7.4 35.6±4.7 42.5±9.4 59.4±7.8 36.5±4.7 99.4±0.5 35.8±5.0 16.2**±3.1**
Sports Computer 27.9±6.6 18.1±5.7 18.2±5.1 32.2±5.1 18.8±6.0 96.0±2.3 11.8±4.2 6.0**±1.6**
Politics 30.5±3.6 21.0±2.8 21.0±2.9 33.9±2.3 21.7±3.3 95.5±7.4 17.7±2.6 9.1**±1.2**
IMDB SST-2 65.6±0.9 68.5±9.4 67.3±1.6 65.6±0.9 69.1±10.7 54.4±9.6 12.5**±8.2** 14.0±3.9
Yelp 92.3±1.3 92.8±2.8 93.3±1.0 92.3±1.3 92.6±2.9 93.8±7.2 15.1±9.1 8.2**±6.5**
SST-2 IMDB 77.7**±2.3** 79.6±7.3 81.8±1.5 78.0±2.3 79.1±8.9 100.0±0.0 88.3±10.9 79.0±17.8
Yelp 84.5±2.4 85.8±6.2 87.6±1.2 84.8±2.3 85.2±7.0 99.7±0.2 81.0**±9.0** 82.2±13.1
Yelp IMDB 83.7±0.6 83.5±1.4 87.1±0.7 83.6±0.6 83.2±1.4 99.7±0.1 74.9±3.8 62.4**±2.5**
SST-2 58.4±2.4 58.6±4.0 66.7±2.5 58.4±2.4 57.9±4.2 96.1±4.7 3.8±1.5 2.4**±0.8**
|
nagasawa-etal-2023-lms | Can {LM}s Store and Retrieve 1-to-N Relational Knowledge? | https://aclanthology.org/2023.acl-srw.22 | It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It is already revealed that language models can store much 1-to-1 relational knowledge, such as {''}country and its capital,{''} with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1-to-N relational knowledge, such as {''}parent and children.{''}However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models{'} abilities toward 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we organize the character of 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving multiple stored objects without excesses or deficiencies at once. We inspect LMs{'} ability to handle 1-to-N relational knowledge on the controlled synthesized data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but generalizing the retrieval ability (expressly, enumeration) is challenging. | # Can Lms Store And Retrieve 1-To-N Relational Knowledge?
Haruki Nagasawa1 Benjamin Heinzerling2,1 Kazuma Kokuta1 **Kentaro Inui**1,2 1Tohoku University 2RIKEN
{haruki.nagasawa.s8, kokuta.kazuma.r3}@dc.tohoku.ac.jp [email protected] [email protected]
## Abstract
It has been suggested that pretrained language models can be viewed as knowledge bases. One of the prerequisites for using language models as knowledge bases is how accurately they can store and retrieve world knowledge. It is already revealed that language models can store much 1-to-1 relational knowledge, such as "country and its capital," with high memorization accuracy. On the other hand, world knowledge includes not only 1-to-1 but also 1to-N relational knowledge, such as "parent and children." However, it is not clear how accurately language models can handle 1-to-N relational knowledge. To investigate language models' abilities toward 1-to-N relational knowledge, we start by designing the problem settings. Specifically, we organize the character of 1-to-N relational knowledge and define two essential skills: (i) memorizing multiple objects individually and (ii) retrieving multiple stored objects without excesses or deficiencies at once.
We inspect LMs' ability to handle 1-to-N relational knowledge on the controlled synthesized data. As a result, we report that it is possible to memorize multiple objects with high accuracy, but generalizing the retrieval ability (expressly, enumeration) is challenging.
## 1 Introduction
As a result of their pretraining on large amounts of text, language models (LMs) store certain world knowledge facts, such as "Paris is the capital of France", in their parameters and can retrieve that knowledge when given a suitable prompt. Since the ability to store and retrieve knowledge is also a key functionality of knowledge bases (KBs; Weikum et al., 2021), prior work has proposed to view language models as knowledge bases (Petroni et al.,
2019). Quantitative evaluation of world knowledge in LMs has focused on 1-to-1 relational knowledge involving two entities, such as a country and its capital (Petroni et al., 2019; Heinzerling and Inui, 2021; Safavi and Koutra, 2021; Razniewski et al.,
![0_image_0.png](0_image_0.png)
Figure 1: Memorize and enumerate relational knowledge. We are considering a synthetic setting in which the LM is made to memorize a specific set of individual relations and then needs to aggregate those relations into 1-to-N relations.
2021). However, the question if and how well LMs can handle 1-to-N relations, such as relations between parents and their children, is underexplored so far.
Here, we conduct a study to assess the capability of LMs to store and retrieve 1-to-N relations in a manner similar to knowledge bases. We consider a setting in which the model first is trained to memorize individual relation instances, such as
"Tom has a child named Emma", "Bob has a child named Ava", "Tom has a child named Lucas", and
"Tom has a child named Olivia". During inference the model then has to retrieve 1-to-N relation, e.g.,
"Tom has children named Emma, Lucas, Olivia"
(Figure 1).
To investigate the possibility of viewing LMs as KBs more precisely, it is necessary to clarify the basic abilities of LMs, such as how accurately they can store 1-to-N relational knowledge and how flexibly they can retrieve multiple entities they have stored.
Our study represents the first comprehensive investigation of 1-to-N relational knowledge. Our contributions are summarized as follows: (1) We identified the capabilities necessary for LMs to handle 1-to-N relational knowledge, taking into account its unique properties. Specifically, LMs must be able to accurately memorize any object appearing discretely and enumerate multiple objects without over- or under-recall based on memory. (§ 3) (2)
Based on the identified capabilities, we formulated two training schemes: element-valued supervision for "memorization" and set-valued supervision for
"enumerating." (§ 4) (3) We conducted a quantitative evaluation of LMs' "memorization" abilities from both subject-oriented and object-oriented perspectives and categorized the errors encountered during "enumerating." Our results suggest that LMs are able to store 1-to-N relational knowledge with reasonable accuracy, but generalizing the ability to enumerate proves to be challenging. (§ 6)
## 2 Related Work
Factual knowledge probing Petroni et al.
(2019) investigated how much knowledge LMs had acquired from large corpora by having models such as pretrained BERT (Devlin et al., 2019) solve problems in the "fill-in-the-blank" format. They also pointed out three critical advantages of treating LMs as KBs: "LMs require no schema engineering, do not need human annotations, and support an open set of queries."
Jiang et al. (2020) and Brown et al. (2020) also worked on creating optimal prompts for extracting correct answers from pretrained LMs. These investigations aim to extract knowledge that LMs have acquired implicitly during pretraining. On the other hand, we are interested in the degree to which knowledge can be handled accurately when LMs explicitly learn it. Thus, investigating what and how well pretrained LMs acquire 1-to-N relational knowledge from corpora is beyond our scope.
Storing 1-to-1 relational knowledge Heinzerling and Inui (2021) established two basic requirements for treating LMs as KBs: "(i) the ability to store a lot of facts involving a large number of entities and (ii) the ability to query stored facts." Based on these requirements, they elaborately examined how much and how accurately LMs can store 1-to-1 relational knowledge by comparing various entity representations. However, the behavior of LMs concerning 1-to-N relational knowledge remains unclear.

Set handling This study explores handling multiple objects, which can be achieved by handling a set of objects. Previous works such as Deep Sets
(Zaheer et al., 2017) and Set Transformer (Lee et al., 2019) are representative ones that address set handling in neural networks or transformers
(Vaswani et al., 2017).
Both focus on sets as inputs, being permutation-invariant and treating sets of arbitrary size. While this study focuses on sets as outputs rather than inputs, properties such as permutation invariance are considered to be essential aspects in common.
## 3 Designing An Approach To 1-To-N Relational Knowledge
In this section, we describe the unique properties of 1-to-N relational knowledge and what capabilities of LMs are needed to handle 1-to-N relational knowledge.
To begin with, we define three significant unique factors that make 1-to-N relational knowledge challenging to deal with: First, when the subject or relation under consideration changes, the number of objects associated with it changes. For example, consider answering the question, "{Subject} has children named <mask>." The difficulty is that the number of correct objects changes depending on the input. Second, considering existing corpora, multiple objects are likely to occur discretely. For example, Barack Obama has two children, Malia and Sasha, but only Malia may appear in some specific contexts, and only Sasha may appear in other contexts. Third, when we assume a situation where an LM is used practically as a KB,
it is necessary to output these discretely appearing objects together to avoid generating an inadequate response to the input query.
Therefore, given the above properties, the two essential LMs' competencies considered necessary to manage 1-to-N relational knowledge are as follows. (i) "the ability to accurately memorize any objects appearing discretely." (ii) "the ability to retrieve multiple objects without over- or underrecall based on memory." In order to consider an end-to-end approach to 1-to-N relational knowledge, this study tackles it as a generative task using the sequence-to-sequence model (Sutskever et al.,
2014), which allows for flexible responses based on input.
![2_image_0.png](2_image_0.png)
## 4 Method

## 4.1 Terminology
In this work, we make use of the following terms:
Relation triple: A triple consisting of a *subject* and an *object* entity, as well as a predicate that describes the relation that holds between the subject and the object, e.g., (Tom, hasChild, Emma).
1-to-N relation: A set of relation triples with the same subject and predicate, but different objects, e.g., (Tom, hasChild, Emma) and (Tom, hasChild, Lucas).
Individual relation instance: A relation triple expressed in text, for example "Tom has a child named Emma."
Element: Viewing a 1-to-N relation as a set, we refer to individual relation instances as *elements* of that set, e.g., "Tom has a child named Emma." is an element of the 1-to-N relation that holds between Tom and his children.
Element-valued supervision: One of the two supervised training schemes we employ. A model is trained on elements, i.e., individual relation instances, of 1-to-N relations. Concretely, the model is given a relation instance with the object masked out, e.g., "Tom has a child named <mask>." and has to predict the masked out object, e.g., "Emma".
The goal of this training scheme is to have the model memorize individual objects based on their corresponding subjects.
Set-valued supervision: In the second of our supervised training schemes the model is trained to predict the set of all objects for a given subject and predicate, e.g., given "Tom has children named <mask>.", the model has to generate the text "Emma, Lucas, Olivia".
Table 1: Templates. We used different templates for each model to fit each pretraining setting.

| Model | Supervision | Parent-children | Director-titles |
|-------|----------------------------|-----------------------------------------|-----------------------------------------------|
| BART | Element-valued supervision | {Sbj} has a child named <mask>. | {Sbj} directed a film titled <mask>. |
| BART | Set-valued supervision | {Sbj} has children named <mask>. | {Sbj} directed following movies: <mask>. |
| T5 | Element-valued supervision | What is the name of {Sbj}'s child? | What movie did {Sbj} direct? |
| T5 | Set-valued supervision | What are the names of {Sbj}'s children? | What are the titles of movies {Sbj} directed? |
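For concreteness, the following is a minimal sketch (our own illustration, not the authors' released code) of how the two supervision schemes can be materialized as (input, target) pairs from a set of relation triples, using the BART-style templates of Table 1; the toy triples are invented.

```python
from collections import defaultdict

triples = [
    ("Tom", "hasChild", "Emma"),
    ("Tom", "hasChild", "Lucas"),
    ("Tom", "hasChild", "Olivia"),
    ("Bob", "hasChild", "Ava"),
]

# Group objects by subject to obtain 1-to-N relations.
objects_by_subject = defaultdict(list)
for subj, _pred, obj in triples:
    objects_by_subject[subj].append(obj)

# Element-valued supervision: one target object per example,
# so a subject with N objects yields N (input, target) pairs.
element_pairs = [
    (f"{subj} has a child named <mask>.", obj)
    for subj, objs in objects_by_subject.items()
    for obj in objs
]

# Set-valued supervision: the target is the comma-joined set of all objects.
set_pairs = [
    (f"{subj} has children named <mask>.", ", ".join(objs))
    for subj, objs in objects_by_subject.items()
]

print(element_pairs[0])  # ('Tom has a child named <mask>.', 'Emma')
print(set_pairs[0])      # ('Tom has children named <mask>.', 'Emma, Lucas, Olivia')
```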
## 4.2 Handling Of 1-To-N Relational Knowledge
We investigate the behavior of LMs for 1-to-N relational knowledge when explicitly trained. Specifically, we use the sequence-to-sequence model to generate variable-length responses to inputs.
As described in § 3, the two abilities necessary for LMs to handle 1-to-N relational knowledge are
(i) memorizing multiple discretely appearing objects and (ii) enumerating memorized objects without excess or deficiency. In this section, we conduct two experiments, each corresponding to the essential abilities.
(i) Memorization The first experiment is aimed at "memorization" through element-valued supervision. Here, 1-to-N relational knowledge is decomposed into a one-to-one form, and we train LMs to memorize multiple objects individually. In the learning process, one object is output in response to an input for a particular subject, and then all objects will be memorized in this fashion. Therefore, the state in which the LMs memorize all N objects can also be paraphrased as the state in which the LMs can output all N objects.
Therefore, the evaluation of whether LMs memorized multiple objects is checked by generating multiple sequences using beam-search. Specifically, N sequences are generated for a subject using the same query as the training data. By checking how many correct objects are included in the sequences, we evaluate how many objects the LMs memorized.
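As a hedged sketch of this evaluation (assuming a HuggingFace seq2seq checkpoint that has already been fine-tuned with element-valued supervision; the model name below is only a placeholder), the top-N candidates can be produced with beam search and compared against the gold objects:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")  # stand-in for the fine-tuned model

def recalled_objects(subject: str, gold_objects: list[str]) -> int:
    """Return how many gold objects occur among the top-N beam-search outputs."""
    n = len(gold_objects)
    inputs = tokenizer(f"{subject} has a child named <mask>.", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=max(n, 2),        # beam width must be >= num_return_sequences
        num_return_sequences=n,     # one candidate per gold object
        max_new_tokens=16,
    )
    decoded = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    return sum(any(gold in cand for cand in decoded) for gold in gold_objects)

print(recalled_objects("Tom", ["Emma", "Lucas", "Olivia"]))
```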
(ii) Enumeration The second experiment attempts to acquire "the ability to enumerate memorized objects." Here, training by set-valued supervision is performed in conjunction with memorization by element-valued supervision. The reason for using the two supervisory methods together is the premise that to enumerate multiple objects, it is necessary to memorize them in the first place.
Although it is possible to perform element-valued supervision and then shift to set-valued supervision, catastrophic forgetting of memorized objects may occur during the training of set-valued supervision.
Indeed, we have confirmed that catastrophic forgetting of memorized objects occurs during set-valued supervision, so in this paper, the two supervisory methods are used together. For some subjects in the training data, LMs explicitly learn the behavior of enumerating the objects in response to queries that explicitly ask for multiple objects. We then test whether set-valued supervision allows LMs to enumerate objects for other subjects as well, i.e.,
whether they can generalize the ability to enumerate.
## 5 Experimental Setup

## 5.1 Synthetic Data
In the following experiments, we uniquely prepared the 1-to-N dataset to measure how well LMs can accurately store plenty of facts. Specifically, we randomly obtained canonical names of parents and their two to four children from Wikidata (Vrandečić and Krötzsch, 2014). We also randomly obtained the canonical names of directors and their two to four representative films from IMDb Datasets1.
Therefore, by preparing 1-to-2, 1-to-3, and 1-to-4 relational knowledge, we will observe how LMs performance changes as the number of objects increases. We only collected data that meets the following conditions.
- To ensure that all entities are distinguishable, there is no data with the same canonical name across both subjects and objects.
- Only entities consisting of four or fewer words separated by spaces or hyphens are used to adjust for storing difficulty due to word length.
We only consider memorizing and enumerating entities which appear in the training data.
1https://www.imdb.com/interfaces/
![4_image_0.png](4_image_0.png)
Director-titles: objs covered ratio
![4_image_1.png](4_image_1.png)
## 5.2 Models And Training Settings
We used the pretrained BART-base (Lewis et al.,
2020) and T5-base (Raffel et al., 2019) as the sequence-to-sequence model in the experiments.
The training in the two experiments described below (§ 6.1 and § 6.2) was continued until the models strongly overfit the training data. Precisely, we continued training until the accuracy of the training data no longer improved by more than 30 epochs.
The accuracy was calculated as follows: for element-valued supervision, the accuracy was determined by whether the model could generate the correct object for each subject in the input. If the model generated one of the correct N objects for each subject, it was considered correct; otherwise, incorrect. For set-valued supervision, the accuracy was determined by whether the model generated a set of multiple correct objects with no omissions or additions. If the model generated a complete set of correct objects, it was considered correct; otherwise, incorrect.
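A small sketch of these two correctness criteria follows (assuming objects are compared as exact strings and that the order within a set-valued answer is ignored, which is our reading of the description above):

```python
def element_correct(prediction: str, gold_objects: list[str]) -> bool:
    # Correct if the generated string is any one of the N gold objects.
    return prediction.strip() in gold_objects

def set_correct(prediction: str, gold_objects: list[str]) -> bool:
    # Correct only if the generated enumeration matches the gold set exactly,
    # with no missing, extra, or duplicated objects (order is ignored).
    predicted = [p.strip() for p in prediction.split(",") if p.strip()]
    return sorted(predicted) == sorted(gold_objects)

assert element_correct("Emma", ["Emma", "Lucas", "Olivia"])
assert not set_correct("Emma, Emma, Lucas", ["Emma", "Lucas", "Olivia"])
```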
As detailed training settings, the learning rate was started at 5e-5 in common with BART and T5, and it was reduced by half if the accuracy did not improve by more than three epochs. The batch size was varied according to the model and training data size/domain. AdamW (Loshchilov and Hutter, 2019) was commonly used as the optimizer. In addition, a different template was used for each model so that the input sentence templates were similar to the pretraining settings for each (BART
uses <mask> token in pretraining, but T5 does not.)
The templates used are listed in Table 1.
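A minimal sketch of this optimization setup is shown below; the plateau scheduler and the dummy model are our own reading of the "halve the learning rate after three epochs without improvement" rule, not released code.

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the seq2seq model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3
)

for epoch in range(10):
    # ... one epoch of element-/set-valued training would go here ...
    train_accuracy = 0.5  # placeholder for the training accuracy defined above
    scheduler.step(train_accuracy)  # halves the LR after 3 epochs without improvement

print(optimizer.param_groups[0]["lr"])
```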
## 6 Experiments

## 6.1 Element-Valued Supervision
In the first experiment, we investigated the ability to memorize multiple objects using element-valued supervision. Here, we tested whether the LMs could correctly store N objects associated with a single subject. Specifically, as shown in Figure 2, the learning process of having one object generated for each input sentence, such as "{Subject} has a child named <mask>." or "{Subject} directed a film titled <mask>." was performed for all objects.
Thus, the learning setup is such that there are as many target sentences as objects for each input sentence.
Table 2: Enumerating accuracy of BART-base and T5-base for each set-valued supervision ratio (30%, 60%, 90%).

| Data domain | 1-to-N | BART 30% | BART 60% | BART 90% | T5 30% | T5 60% | T5 90% |
|-----------------|--------|----------|----------|-----------|--------|--------|----------|
| Parent-children | 1-to-2 | 46.7 | 45.8 | 49.3 | 27.0 | 40.7 | **49.5** |
| Parent-children | 1-to-3 | 8.33 | 9.33 | 9.67 | 10.7 | 16.8 | **20.7** |
| Parent-children | 1-to-4 | 1.00 | 1.33 | 2.17 | 0.500 | 2.33 | **2.67** |
| Director-titles | 1-to-2 | 42.0 | 43.3 | **44.17** | 19.8 | 24.2 | 28.7 |
| Director-titles | 1-to-3 | 22.5 | 24.2 | **26.3** | 14.8 | 15.8 | 23.7 |
| Director-titles | 1-to-4 | 6.17 | 10.7 | **11.3** | 2.33 | 3.83 | 7.00 |
We then checked the degree to which LMs trained with element-valued supervision could recall multiple objects through the generation of N
sequences using beam search. To be precise, N
was set to the number of objects associated with the input subject, and we analyzed the count of correct objects within those sequences.
In this experiment, we also tested whether the LMs' memorization accuracy changed when the training data size, i.e., the number of entities, was varied. Here, we evaluated this memorization accuracy from two perspectives.
Object-oriented memorization accuracy The first perspective is object-oriented memorization accuracy, shown in Figure 3, which evaluates the degree of recall of objects in the training data. Figure 3a and 3b correspond to the parent-children and director-titles datasets, respectively. The solid blue line corresponds to T5, and the dashed yellow line to BART, with darker colors corresponding to 1-to-N relational knowledge with more objects.
The results show that T5 has better memorization accuracy than BART, although no significant differences by data domain were observed. Also, the larger N, i.e., the greater the number of objects associated with one subject, the more likely N entities could not be memorized.
Subject-oriented memorization accuracy The second perspective, subject-oriented memorization accuracy, evaluated how many subjects were memorized with all related N objects. Specifically, in generating multiple objects by beam search, we show how many subjects existed for which all N
objects were generated.
The results are shown in Figure 4, where 4a and 4b correspond to the parent-children and director-title datasets, respectively, as in Figure 3.
The results confirmed that, overall, T5 has higher memorization accuracy. Looking at performance
by the number of objects, it is clear that, in common with the two data domains and two models, the greater the number of objects, the more difficult it was to remember all of them in conjunction with the subject.
Interestingly, both memorization accuracies in the two perspectives show roughly independent behavior concerning data size. One possible reason for the higher overall memory accuracy of T5 is that the parameter size of the T5-base is about 1.5 times larger than that of BART-base. This may contribute to higher memory accuracy. The fact that 100% memorization accuracy was not achieved for either data size may suggest that memorizing 1-to-N relational knowledge is not easy for LMs.
Examples of LMs' predictions are shown in Table 3.
## 6.2 Element-Valued And Set-Valued Supervision
In this subsequent experiment, the model was trained with element-valued and set-valued supervision to acquire the ability to enumerate all associated objects. More expressly, compared to the first experiment, we additionally employed set-valued supervision, which involved using "{Subject} has children named <mask>." as the input sentence and "{Object1}, {Object2}, ..." as the corresponding target sentence, as an example. This approach aimed to generalize the model's ability to enumerate all accurately memorized objects in response to queries requesting multiple objects.
We conducted both element-valued and setvalued supervision during training. Specifically, we trained LMs using element-valued supervision on all subjects to memorize all associated objects.
We fixed the training data size at 3000 subjects for each. Simultaneously, we randomly selected 20%
of the subjects, i.e., 600 subjects, as a test set for set-valued supervision. For the remaining 80% of
Table 3: Examples of LMs' predictions (top-N sequences generated by beam search).

| Data Domain | 1-to-N | Subject | Gold objects | Top-N sequences |
|-----------------|--------|-------------|--------------------------------------------------------------------|-----------------|
| Parent-children | 1-to-3 | Dr. Dre | Hood Surgeon, La Tanya Danielle Young, Truice Young | BART: 1. Hood Surgeon 2. Truice Young 3. Young Hood Surgeon; T5: 1. Hood Surgeon 2. Truice Young 3. La Tanya Danielle Young |
| Director-titles | 1-to-3 | Jack Holton | A Dream for Christmas, Escape to Witch Mountain, The Wild Country | BART: 1. Escape to Witch Mountain 2. A Dream for Christmas 3. The Wild Country; T5: 1. Escape to Witch Mountain 2. A Dream for Christmas 3. Adventures in Dinosaur City |
the subjects, we varied the proportion of subjects for which set-valued supervision was applied (i.e.,
30%, 60%, or 90%) to examine whether the generalization ability would change depending on the number of instances that the LMs learned how to enumerate their corresponding objects.
The goal was to investigate how well the model could generalize to subjects in the test set when using set-valued supervision and to determine the impact of varying the proportion of subjects with set-valued supervision on model performance.
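The following is an illustrative sketch (our reconstruction under the stated 20% test split and 30/60/90% ratios; function and variable names are our own) of how such a mixed training set could be assembled:

```python
import random

def build_training_set(objects_by_subject, set_ratio=0.9, test_fraction=0.2, seed=0):
    """Element-valued pairs for every subject; set-valued pairs for a fraction
    of the non-test subjects (30%, 60%, or 90% in the paper)."""
    rng = random.Random(seed)
    subjects = sorted(objects_by_subject)
    rng.shuffle(subjects)

    n_test = int(len(subjects) * test_fraction)
    test_subjects = subjects[:n_test]            # evaluated on enumeration only
    train_subjects = subjects[n_test:]
    set_subjects = set(train_subjects[: int(len(train_subjects) * set_ratio)])

    examples = []
    for subj in subjects:                        # element-valued supervision for all subjects
        for obj in objects_by_subject[subj]:
            examples.append((f"{subj} has a child named <mask>.", obj))
    for subj in set_subjects:                    # set-valued supervision for sampled subjects
        examples.append((f"{subj} has children named <mask>.",
                         ", ".join(objects_by_subject[subj])))
    return examples, test_subjects

demo = {"Tom": ["Emma", "Lucas", "Olivia"], "Bob": ["Ava", "Mia"], "Ann": ["Leo", "Kim"]}
examples, test_subjects = build_training_set(demo)
print(len(examples), test_subjects)
```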
The results (Table 2) show that the enumerating accuracy is highest when the supervision ratio is 90% for all, indicating that it is important to have many training instances to generalize the enumerating capability.
Although there are differences in the enumerating accuracy scores across data domains and models, we found a tendency for the enumeration performance to decrease significantly as the number of target words increases.
Error analysis Quantitative error distributions are shown in Table 4, and specific examples of incorrect answers are shown in Table 5. Table 4 shows that for small numbers of objects (e.g., 1-to-2), BART tended to generate incorrect objects (labeled "Incorrect"), while T5 often duplicated the same object (labeled "Duplication"), highlighting a noticeable difference between the two models. As the number of objects increased (e.g., 1-to-3, 1-to-4), both models were more likely to produce wrong answers due to missing objects (labeled "Missing"). The distribution of errors across different datasets was generally similar, but both models were more prone to missing objects in the parent-children dataset, suggesting that the type of entity names might have an impact on the error patterns.
## 7 Conclusion
We addressed handling 1-to-N relational knowledge by a generative approach using the sequenceto-sequence model. Since little work has been done on 1-to-N relational knowledge in previous studies, we started by organizing the properties of 1-to-N
relational knowledge and setting up the capabilities considered necessary for LMs based on these properties.
Specifically, we defined two essential capabilities: "memory of discretely appearing multiple objects" and "enumeration of objects based on memory." Then, we developed training schemes based on these perspectives. We used element-valued supervision and beam search for the former to memorize and evaluate multiple objects. We found that nearly 90% of the objects could be memorized, although we observed a tendency for memory omissions to occur as the number of objects increased.
However, we also confirmed that it is challenging to achieve 100% perfect memory.
For the latter, we attempted to generalize
Table 4: Distribution of enumerating errors by error type.

| Data domain | 1-to-N | BART Incorrect | BART Missing | BART Duplication | T5 Incorrect | T5 Missing | T5 Duplication |
|-----------------|--------|----------------|--------------|------------------|--------------|------------|----------------|
| Parent-children | 1-to-2 | 280 | 0 | 18 | 154 | 2 | 147 |
| Parent-children | 1-to-3 | 229 | 306 | 7 | 93 | 287 | 96 |
| Parent-children | 1-to-4 | 175 | 406 | 6 | 105 | 380 | 99 |
| Director-titles | 1-to-2 | 298 | 0 | 37 | 156 | 1 | 271 |
| Director-titles | 1-to-3 | 70 | 352 | 20 | 41 | 287 | 130 |
| Director-titles | 1-to-4 | 25 | 481 | 25 | 37 | 441 | 80 |
Table 5: Examples of enumerating error for the parent-children dataset. The error part is colored in red. These errors are for 1-to-3 relational knowledge and were generated by the T5, which is trained with 90% set-valued supervision.
| Error | Subject | Gold | Prediction |
|--------------------|---------------|------------------------------------------------|-----------------------------------------------------|
| Missing | Jeb Bush | George P. Bush, Noelle Bush, John Bush Jr. | John Bush Jr., Noelle Bush (missing) |
| Incorrect | Shimon Peres | Tsvia Walden, Hemi Peres, Yoni Peres | Tsvia Walden, Yoni Peres, Leo Peres |
| Duplication | Alice Meynell | Viola Meynell, Everard Meynell, Madeline Lucas | Viola Meynell, Madeline Lucas, Viola Meynell |
| Excess (Incorrect) | Alan Alda | Beatrice Alda, Elizabeth Alda, Eve Alda | Elizabeth Alda, Beatrice Alda, Eve Alda, Nanna Alda |
"enumeration ability" by set-valued supervision in conjunction with memorization by element-valued supervision. The results showed that learning more data improved the generalization performance for acquiring enumeration ability. However, we also observed the LM's behavior, which aligns with human intuition: the more objects increase, the more difficult it becomes to enumerate all of them correctly. Notably, the generalization performance for 1-to-2 relational knowledge was only about 50%
for the test set, and for 1-to-4 relational knowledge, only about 10% generalization performance at most.
For our next steps, we are considering the following approach. The training setup of the current element-valued supervision is characterized by multiple target sentences for one input sentence, which is incompatible with the model's learning algorithm. Therefore, we would like to test a memorizing method using ordinal numerals such as first and second to distinguish each template for N objects. We would also like to investigate this memorization method's effect on the generalization performance of enumeration.
As for enumeration, which has been difficult to generalize, we would like to examine effective means of improving performance for a small number of objects. Specifically, we are considering adjusting the hyperparameters for text generation and verifying whether errors in enumerating will be reduced. After that, we would like to explore learning methods to enumerate N objects without needing hyperparameters adjustment in stages.
Introducing our 1-to-N problem setting into the LMs-as-KBs paradigm opens up many more intriguing challenges. While we investigated this setting under a controlled condition with a uniform frequency of object appearance, the frequency of each of the N objects in a corpus is likely to vary in reality. Furthermore, there may be multiple paraphrases expressing the same relation.
For example, in our study, we only considered the phrase "{Subject} has a child named {Object}."
but there are other phrases such as "{Subject}'s child is {Object}." or "{Object} is a daughter of
{Subject}." As a primary avenue for future research, we will explore whether LMs can handle 1-to-N
relational knowledge effectively under these more complex conditions.
## Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number 21K17814 and JST CREST Grant Number JPMJCR20D2, Japan.
## References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Benjamin Heinzerling and Kentaro Inui. 2021. Language models as knowledge bases: On entity representations, storage capacity, and paraphrased queries.
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 1772–1791. Association for Computational Linguistics.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know. *Trans. Assoc. Comput. Linguistics*,
8:423–438.
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. 2019.
Set transformer: A framework for attention-based permutation-invariant neural networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pages 3744–3753. PMLR.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 7871–7880.
Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019,
New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *CoRR*, abs/1910.10683.
Simon Razniewski, Andrew Yates, Nora Kassner, and Gerhard Weikum. 2021. Language models as or for knowledge bases. *CoRR*, abs/2110.04888.
Tara Safavi and Danai Koutra. 2021. Relational world knowledge representation in contextual language models: A review. *CoRR*, abs/2104.05837.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks.
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. *Communications of the ACM*, 57(10):78–85.
Gerhard Weikum, Xin Luna Dong, Simon Razniewski, and Fabian M. Suchanek. 2021. Machine knowledge: Creation and curation of comprehensive knowledge bases. *Found. Trends Databases*, 10(2-4):108–490.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander J. Smola. 2017. Deep sets. *CoRR*,
abs/1703.06114. |
imai-etal-2023-theoretical | Theoretical Linguistics Rivals Embeddings in Language Clustering for Multilingual Named Entity Recognition | https://aclanthology.org/2023.acl-srw.24 | While embedding-based methods have been dominant in language clustering for multilingual tasks, clustering based on linguistic features has not yet been explored much, as it remains baselines (Tan et al., 2019; Shaffer, 2021). This study investigates whether and how theoretical linguistics improves language clustering for multilingual named entity recognition (NER). We propose two types of language groupings: one based on morpho-syntactic features in a nominal domain and one based on a head parameter. Our NER experiments show that the proposed methods largely outperform a state-of-the-art embedding-based model, suggesting that theoretical linguistics plays a significant role in multilingual learning tasks. | # Theoretical Linguistics Rivals Embeddings In Language Clustering For Multilingual Named Entity Recognition
Sakura Imai1, Daisuke Kawahara1, Naho Orita1, Hiromune Oda2
1Waseda University 2The University of Tokyo
[email protected], {dkw, orita}@waseda.jp, [email protected]
## Abstract
While embedding-based methods have been dominant in language clustering for multilingual tasks, clustering based on linguistic features has not yet been explored much, as it remains baselines (Tan et al., 2019; Shaffer, 2021). This study investigates whether and how theoretical linguistics improves language clustering for multilingual named entity recognition (NER). We propose two types of language groupings: one based on morpho-syntactic features in a nominal domain and one based on a head parameter. Our NER experiments show that the proposed methods largely outperform a state-of-the-art embedding-based model, suggesting that theoretical linguistics plays a significant role in multilingual learning tasks.
## 1 Introduction
Language clustering has been used to facilitate an effective cross-lingual transfer for low-resource languages in various tasks, such as machine translation (Tan et al., 2019). While the majority of recent clustering approaches depend on embeddings from language models, linguistic knowledge has not yet been exploited enough. Previous studies have merely used descriptive typological features (Oncevay et al., 2020) and a coarse language family classification as baselines (Shaffer, 2021). We argue that there is large room for improvement in language clustering using linguistics knowledge.
This study examines two language classifications based on theoretical linguistics and tests their effectiveness in multilingual NER. Multilingual NER is selected because comparison models are available from Shaffer (2021), namely an embedding-based classification and a language family classification.
Although there are datasets available for NER in various languages (Tedeschi et al., 2021; Adelani et al., 2021; Rahimi et al., 2019), our study focuses on Indo-European languages because there is a rich body of research in theoretical linguistics.
Our classification approaches draw on morpho-syntactic parameters proposed primarily in theoretical syntax. The first classification is based on a language tree created by Ceolin et al. (2021), which reflects various morpho-syntactic parameters in a nominal domain. The second classification uses the head parameter (Chomsky, 1981), which indicates the "head" of a phrase in relation to its complements. We select these parameters because NER is a task that identifies mentions and types of named entities that are mostly nouns.
We show that clustering languages based on such parameters results in more effective language groupings beyond the state-of-the-art embeddingbased method. Moreover, our clustering approaches demonstrate comparable or better performance than a model trained with all Indo-European languages (hence regardless of a substantial difference in the data size). These results suggest that theoretical linguistics has a promising potential in multilingual NLP tasks.
## 2 Related Work
In the current age of globalization, collecting information using various languages is getting more important than ever. Multilingual models have gained increasing attention for this purpose. Recently, pre-trained large-scale multilingual models using neural networks, such as Multilingual BERT (mBERT) (Devlin et al., 2019) and XLMRoBERTa (Conneau et al., 2020), have provided competitive results. However, the amount of labeled data available for fine-tuning these multilingual models is highly skewed toward "major" languages. In fact, there are more than 2,000 low-resource languages with little or no labeled data (Joshi et al., 2020).
To alleviate the problem with low-resource languages, cross-lingual transfer learning has been proposed (Artetxe and Schwenk, 2019). The aim of this method is to adapt a language model trained with high-resource languages to low-resource languages. Various transfer learning methods have been proposed. For example, Patil et al. (2022) proposed a technique using subword units (byte pair encoding (Sennrich et al., 2016)). Ri and Tsuruoka
(2022) investigated which conditions make crosslingual transfer learning possible by conducting artificial language experiments.
Language clustering is another kind of transfer learning method mainly used in machine translation. Tan et al. (2019) compared clustering by language family and by embeddings and reported that the embedding-based clustering better improved translation accuracy. Oncevay et al. (2020) proposed a language clustering method that integrates syntactic features of WALS (Dryer and Haspelmath, 2013) and embeddings from machine translation models. As for NER, Shaffer (2021) compared clustering by language family and by embeddings and reported that the embedding-based clustering outperformed language family clustering. In sum, clustering by linguistic prior was used as baselines, and these baselines did not attain better results than the ones with embeddings.
Other than language clustering, linguistic knowledge has been widely used in various NLP
tasks (O'Horan et al., 2016; Gerz et al., 2018; Ponti et al., 2019). For example, some approaches use typological or phylogenetic features in multilingual fine-tuning for cross-lingual transfer (Lin et al.,
2019; Pires et al., 2019; Dhamecha et al., 2021; de Vries et al., 2022). Likewise, language family information or typological features, such as word order, have been used in various kinds of multilingual tasks, such as machine translation (Saleh et al.,
2021; Chronopoulou et al., 2022), dependency parsing (Ammar et al., 2016), and pre-training (Fujinuma et al., 2022).
Crucially, however, the linguistic information used in all these studies is limited to the extent of language family and typological features which are directly observable. No studies using more profound linguistic knowledge have been conducted.
Therefore, it remains to be seen whether and to what extent linguistic knowledge other than linguistic family and typological features could help improve clustering for multilingual tasks.
## 3 Language Clustering Using Parameters Of Theoretical Linguistics

## 3.1 Linguistic Parameters
As shown in Section 2, multiple studies have attempted to use linguistic priors for multilingual NLP tasks. However, the knowledge used in these studies remains descriptive and unable to represent the internal nature of language.
Thus, we use "linguistic parameters" proposed by Chomsky (1981) in theoretical linguistics for our clustering to capture the characteristics of language that cannot be seen superficially and cannot be captured by phylogenetic comparison of languages. As seen in Sections 3.3 and 3.4, linguistic parameters are morpho-syntactically more detailed and abstract than typological features in WALS
that have been used in the previous studies. We apply these parameters to our clustering methods and conduct experiments on multilingual NER.
## 3.2 Selection Of Tasks And Languages
This study selects NER as the target task for comparison with Shaffer's (2021) study, which tried to improve the performance of multilingual NER
by clustering languages based on embeddings and language family.
We use 25 languages that belong to the IndoEuropean language family because there is a sufficient amount of annotated data available for NER,
and there is a rich body of literature in theoretical linguistics.
Table 1 lists the languages used in this study.
Each language is represented by its ISO 639-1 language code1, which is summarized in Appendix
(Table 10). In the previous study (Shaffer, 2021),
sub-families such as Celtic were not used, despite that their NER data are available. To conduct more comprehensive experiments, we select languages from a broader range of sub-families.
## 3.3 Clustering Based On Nominal Parameters
NER is a task that identifies and classifies entities in texts. Since the named entities are mostly represented as noun phrases, clustering languages by features related to a noun phrase would be effective for training. Thus, we focus on morpho-syntactic parameters that capture cross-linguistic similarities and differences in a nominal domain.
1http://www.infoterm.info/standardization/iso_639_1_2002.php
| Sub-family | Languages | Shaffer (2021) |
|--------------|--------------------------------|------------------|
| Romance | ro, fr, es, pt, it, scn | fr, es, it |
| Germanic | af, nl, de, is, en, da, no, fo | de, en, da |
| Greek | el | - |
| Slavic | bg, pl, ru, sl, hr | ru |
| Indo-Iranian | ps, mr, hi | hi |
| Celtic | cy, ga | - |
Table 1: The languages used in this study and Shaffer (2021).

![2_image_1.png](2_image_1.png)
To cluster languages by nominal parameters, we use a language tree proposed by Ceolin et al. (2021).
They classified Indo-European languages based on 94 morpho-syntactic parameters in a nominal domain. An example of nominal parameters, "grammaticalized gender" is shown in (1).
(1) a. il libro
       the.MASC book.MASC

    b. la macchina
       the.FEM car.FEM
In languages such as Italian, the gender of definite articles varies depending on the gender of nouns as seen in (1a, 1b).
This parameter is just one example and many
other types of parameters are considered in (Ceolin et al., 2021): e.g., the presence/absence of the
definite article added to the relative clause and the
presence/absence of genitive markings using an
adposition. These parameters have often been discussed in theoretical syntax, but many of them are
not included in descriptive studies, such as WALS.
The relevant language tree is shown in Figure 1,
which was created by Ceolin et al. (2021) based on
the inter-lingual distances.2
To make clusters, we incrementally combine subfamilies close to each other in the language tree.
For example, to create 3 clusters, we first combine
2https://github.com/AndreaCeolin/Boundaries
| # | Sub-family |
|-----|-------------------------------------|
| 1 | Germanic, Slavic, Hellenic, Romance |
| 2 | Indo-Iranian |
| 3 | Celtic |

Table 2: Clustering by Figure 1 (number of clusters: 3).

![2_image_0.png](2_image_0.png)

Figure 2: Head-initial (left) and head-final (right) of pre/postpositional phrase (PP).
Germanic and Slavic because they are close to each other in the tree (Figure 1). Hellenic and then Romance are merged into the German-Slavic group.
Celtic and Indo-Iranian remain as independent clusters. Table 2 summarizes these 3 clusters. For our experiments, the number of clusters is determined by the elbow method described in Section 4.2.
## 3.4 Clustering Based On The Head Parameter
To identify named entities in text, a language model may use contextual information surrounding the noun phrases. Since a noun phrase is often a part of a verb phrase as an object or a part of an adpositional phrase (i.e., a pre/postpositional phrase) that represents location, clustering languages by this kind of structural information may lead to a more effective clustering.
Based on this hypothesis, the same 25 IndoEuropean languages are clustered by the head parameter. The head parameter determines where the head (the "core" element) of a phrase is placed in the phrase structure. For example, in the case of a pre/postpositional phrase (PP), if it is head-initial, the head, i.e., the preposition (P), precedes the noun phrase (NP), and vice versa (see Figure 2).
The crucial difference from previous descriptive work such as WALS is that the word order of modifiers (e.g., adverbs for verbs and adjectives for nouns) is irrelevant, but the order of the head (e.g.,
V in VP) and its complement (e.g., NP for V in VP)
is crucial under the head parameter. This is different from the word order classifications in WALS,
where the order of the head is no more or less significant than that of modifiers and the notion of head is much less clear. Thus, the head parameter offers a simpler and more abstract framing of word order in a phrase, which crucially focuses on the position of the head and its complement in a phrase.
Table 3 shows the classification based on the head parameter.
| Head Parameter | Sub-Family |
|---------------------|------------------------------------------|
| Mainly Head-Initial | Romance, Slavic, Germanic, Greek, Celtic |
| Mainly Head-Final | Indo-Iranian |
Table 3: Clustering based on the head parameter (number of clusters: 2).
## 4 NER Experiments
We conduct experiments on NER using the two clustering methods described in Section 3.
## 4.1 Experimental Setup
There are several datasets available for NER experiments, such as WikiNEuRal (Tedeschi et al., 2021)
and MasakhaNER (Adelani et al., 2021). Among them, we select the WikiAnn dataset3(Rahimi et al., 2019) because it has an extensive coverage of Indo-European languages, where these languages have been well-documented in theoretical linguistics. The WikiAnn dataset consists of Wikipedia articles for 176 languages that are automatically annotated with three types of named entities: LOC
(location), PER (person), and ORG (organization).
An overview of our experiments is shown in Figure 3. First, the training sets of all languages in a cluster are concatenated and fed into a pretrained language model for fine-tuning. We use XLM-RoBERTa-base4(Conneau et al., 2020) as the pre-trained language model. This model has 270M parameters and was trained on 2.5TB of CommonCrawl data in 100 languages. Then, the evaluation set of each language in the cluster is used to evaluate and calculate an F1 score. We perform this evaluation for each cluster using the seqeval framework (Nakayama, 2018) three times and calculate the mean F1 score and standard deviation. For all experiments, we set the batch size to 32, the maximum length of the input to 512, and the learning rate to 5e-5 and conduct three epochs of fine-tuning. We use NVIDIA V100 SXM2 on ABCI5as our computing resource, and the average time cost for fine-tuning is approximately one hour.
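For reference, a minimal sketch of the seqeval scoring step described above is given below; the fine-tuning itself is omitted and the label sequences are made up for illustration.

```python
from seqeval.metrics import f1_score

# Gold and predicted label sequences in IOB2 format, e.g. from a WikiAnn evaluation set.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"]]

print(f1_score(y_true, y_pred))  # entity-level F1, as computed by the seqeval framework
```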
In our experiments, we select three classifications as baselines. The first is monolingual in which each language is taken as a single cluster.
The second is a clustering based on embeddings, and the last is Indo-European all languages (IE-all).
Since all the target languages shown in Table 1 are phylogenetically classified into the Indo-European family, using "language family" for clustering corresponds to using a single cluster consisting of all languages in this study.
## 4.2 Clustering Based On Embeddings
We use the embedding-based clustering method proposed by Shaffer (2021) for comparison. An overview of embedding-based clustering is shown in Figure 4.
First, a pre-trained language model is fine-tuned with a language identification task using the WikiAnn training sets. We trained XLM-RoBERTa-base for 3 epochs, setting the batch size to 32, the random seed to 42, and the learning rate to 5e-5. Following Shaffer (2021), we tried a single seed for this preliminary experiment. Language identification is the task of predicting which language the input text is written in. We use all 25 languages in Table 1.
Next, each sentence in the WikiAnn validation sets is given to the fine-tuned XLM-RoBERTa model to obtain embeddings from the [CLS] tokens.
Based on the obtained embeddings, clustering is performed recursively by agglomerative clustering.
We then label the cluster for each input sentence and choose the most frequent cluster for each language among its sentences.
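A rough sketch of this procedure follows (our reconstruction, not Shaffer's released code; the embeddings below are random placeholders standing in for the [CLS] vectors of the fine-tuned model):

```python
from collections import Counter
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# embeddings: (num_sentences, hidden_size) [CLS] vectors; langs: language code per sentence.
embeddings = np.random.randn(200, 768)          # placeholder values
langs = ["ro"] * 100 + ["hi"] * 100

labels = AgglomerativeClustering(n_clusters=3).fit_predict(embeddings)

# Assign each language to its most frequent cluster among its sentences.
language_cluster = {
    lang: Counter(l for l, s_lang in zip(labels, langs) if s_lang == lang).most_common(1)[0][0]
    for lang in set(langs)
}
print(language_cluster)
```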
Table 4 shows the resulting clusters using 1,000 and 10,000 samples from the validation set for each language in the WikiAnn dataset. 1,000 and 10,000 are the maximum number of inputs from the validation sets, respectively. For languages that have the validation samples for less than the limits, all samples are used to obtain embeddings.
The optimal number of clusters is determined to be 3 by the elbow method (Thorndike, 1953)
when comparing with the clustering method using the nominal parameters described in Section 3 (see Section 5.1 for the experimental results with other numbers of clusters {2, 4, 5}). The elbow method is used to align our embedding-based method with Shaffer's (2021) study, to make a comparison with the clusterings by the nominal parameters. The number of clusters is aligned to 2 to generate clusters when compared with the clustering method using the head parameter.
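A brief sketch of the elbow heuristic is shown below; it is illustrated with k-means inertia for simplicity, which is not necessarily the exact distortion measure used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

embeddings = np.random.randn(200, 768)  # placeholder [CLS] embeddings
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings).inertia_
            for k in range(2, 8)}
for k, inertia in inertias.items():
    print(k, round(inertia, 1))  # choose k at the "elbow" of this curve
```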
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
Table 4: Clusters obtained from the embeddings of 1,000 and 10,000 validation samples per language.

| # | 1,000 samples | 10,000 samples |
|---|--------------------------------------------------------------------------------|-------------------------------------------------|
| 1 | cy, ga, ps, mr, hi, ro, fr, bg, pl, ru, sl, hr, af, nl, de, is, en, da, no, fo | ga, ro, fr, es, pt, it, scn, pl, sl, hr, de, en |
| 2 | es, pt, it, scn | mr, hi, ru, af, nl, is, da, no, fo |
| 3 | el | cy, ps, el, bg |
## 4.3 Results
Table 5 shows the comparisons in the NER evaluations of monolinguals and the clusterings using the nominal parameters, embeddings (1,000 and 10,000 samples), and all languages in Indo-
European family (IE-all). Table 6 shows the results with the head parameter.
We first compare the NER evaluations of the clusterings based on the morpho-syntactic parameters and embeddings. The NER evaluations using the nominal parameters (Table 5) show that the clustering by the nominal parameters is superior to that by embeddings. More than 70% of all the target languages attained better scores. The clustering based on the head parameter (Table 6) outperformed the embedding-based clusterings as well, achieving the best scores in 80% of the target languages.
We then compare our methods using morphosyntactic parameters with a model using all the Indo-European languages (IE-all). As for the number of languages that achieved the best score, 11 languages attained better scores with the clustering by the nominal parameters. This is slightly lower than the scores with the IE-all, which was 14 languages (Table 5). The clustering based on the head parameter scored the best in that approximately 70% of all the target languages outperformed the model with the IE-all (Table 6).
## 5 Analysis

## 5.1 Quantitative Analysis
Our parameter-based methods significantly outperformed the embedding-based method as in Section 4.3.
Table 5: NER results (F1 scores) with 3 clusters: monolingual (mono), clustering by the nominal parameters (noun), embedding-based clustering with 1,000 and 10,000 samples, and IE-all.

| 3 clusters | | | | | | |
|--------------|--------|-------|-------|-------|--------|--------|
| lang | #train | mono | noun | #1000 | #10000 | IE-all |
| cy | 10,000 | 91.09 | 91.57 | 91.73 | 92.42 | 92.95 |
| ga | 1,000 | 76.51 | 85.72 | 84.11 | 84.43 | 84.90 |
| ps | 100 | 0.00 | 55.92 | 54.68 | 53.32 | 52.22 |
| mr | 5,000 | 85.50 | 86.96 | 87.93 | 88.58 | 88.34 |
| hi | 5,000 | 86.06 | 86.89 | 89.18 | 89.90 | 89.47 |
| ro | 20,000 | 92.64 | 94.32 | 94.04 | 93.98 | 94.18 |
| fr | 20,000 | 88.99 | 91.04 | 90.74 | 90.53 | 91.05 |
| es | 20,000 | 89.19 | 91.51 | 91.34 | 90.52 | 91.63 |
| pt | 20,000 | 90.24 | 92.11 | 91.79 | 91.43 | 92.15 |
| it | 20,000 | 90.79 | 92.22 | 91.93 | 91.52 | 92.06 |
| scn | 100 | 1.18 | 80.08 | 75.58 | 77.12 | 81.04 |
| el | 20,000 | 90.07 | 91.21 | 90.40 | 90.07 | 91.04 |
| bg | 20,000 | 92.48 | 93.25 | 92.64 | 93.34 | 93.42 |
| pl | 20,000 | 89.86 | 91.34 | 91.12 | 91.22 | 91.43 |
| ru | 20,000 | 88.52 | 89.96 | 89.32 | 90.02 | 89.88 |
| sl | 15,000 | 93.02 | 93.89 | 93.65 | 93.88 | 93.86 |
| hr | 20,000 | 90.90 | 92.05 | 91.88 | 92.06 | 92.02 |
| af | 5,000 | 89.06 | 91.19 | 91.51 | 90.75 | 91.80 |
| nl | 20,000 | 90.64 | 92.59 | 91.74 | 92.17 | 92.49 |
| de | 20,000 | 87.47 | 88.59 | 88.13 | 88.31 | 88.70 |
| is | 1,000 | 73.98 | 87.54 | 86.75 | 87.44 | 88.29 |
| en | 20,000 | 82.27 | 84.12 | 84.22 | 83.97 | 84.01 |
| da | 20,000 | 91.73 | 93.15 | 92.59 | 93.03 | 93.04 |
| no | 20,000 | 91.98 | 93.32 | 93.14 | 93.24 | 93.49 |
| fo | 100 | 0.00 | 86.61 | 86.35 | 87.44 | 87.69 |
This suggests that the parameters in theoretical linguistics have a yet-to-be-explored potential in multilingual NLP. This section provides some more detailed analysis that supports this claim.
Clustering results First, we observe some unstable results in the embedding-based clustering.
Table 4 shows that the resulting clusters greatly differ depending on the number of samples used to obtain embeddings. Thus, the embedding-based clustering could lead to inconsistent results and may not always be the most effective method.
The elbow method Moreover, we found that the optimal number of clusters determined by the elbow method did not result in the best performance in the embedding-based approach. For example, while the elbow method identified 3 clusters as optimal, the best scores were obtained when the number of clusters was 5 with 10,000 samples. This indicates that the optimal number of clusters obtained by the elbow method may not always be the most effective one, at least in NER.6 Thus, we examine the results with different numbers of clusters.

6Shaffer (2021) also used the elbow method to determine the number of clusters (which was 4) but their experiments did not test other numbers of clusters.
Table 6: NER results (F1 scores) with 2 clusters based on the head parameter.

| 2 clusters | | | | | | |
|------|--------|------|------|-------|--------|--------|
| lang | #train | mono | head | #1000 | #10000 | IE-all |
| cy | 10,000 | 91.09 | 93.15 | 92.22 | 91.88 | 92.95 |
| ga | 1,000 | 76.51 | 85.37 | 84.11 | 84.38 | 84.90 |
| ps | 100 | 0.00 | 55.92 | 55.31 | 55.02 | 52.22 |
| mr | 5,000 | 85.50 | 86.96 | 88.71 | 88.29 | 88.34 |
| hi | 5,000 | 86.06 | 86.89 | 89.48 | 89.42 | 89.47 |
| ro | 20,000 | 92.64 | 94.43 | 94.04 | 94.17 | 94.18 |
| fr | 20,000 | 88.99 | 91.10 | 90.74 | 90.56 | 91.05 |
| es | 20,000 | 89.19 | 91.66 | 91.34 | 90.52 | 91.63 |
| pt | 20,000 | 90.24 | 92.00 | 91.79 | 91.43 | 92.15 |
| it | 20,000 | 90.79 | 92.03 | 91.93 | 91.52 | 92.06 |
| scn | 100 | 1.18 | 77.04 | 75.58 | 77.12 | 81.04 |
| el | 20,000 | 90.07 | 91.49 | 90.91 | 91.28 | 91.04 |
| bg | 20,000 | 92.48 | 93.60 | 93.22 | 93.34 | 93.42 |
| pl | 20,000 | 89.86 | 91.38 | 91.12 | 91.33 | 91.43 |
| ru | 20,000 | 88.52 | 89.86 | 89.77 | 89.98 | 89.88 |
| sl | 15,000 | 93.02 | 93.97 | 93.65 | 93.95 | 93.86 |
| hr | 20,000 | 90.90 | 92.27 | 91.88 | 92.07 | 92.02 |
| af | 5,000 | 89.06 | 91.70 | 91.30 | 91.73 | 91.80 |
| nl | 20,000 | 90.64 | 92.56 | 91.90 | 92.23 | 92.49 |
| de | 20,000 | 87.47 | 89.06 | 88.13 | 88.61 | 88.70 |
| is | 1,000 | 73.98 | 88.04 | 87.28 | 87.63 | 88.29 |
| en | 20,000 | 82.27 | 84.37 | 84.22 | 84.02 | 84.01 |
| da | 20,000 | 91.73 | 93.39 | 92.76 | 92.91 | 93.04 |
| no | 20,000 | 91.98 | 93.46 | 93.05 | 93.34 | 93.49 |
| fo | 100 | 0.00 | 88.21 | 87.58 | 88.70 | 87.69 |
In particular, we compare clustering by embeddings and by the nominal parameters.7 Tables 7 and 8 show the resulting clusters obtained by the embedding-based clustering when k = 2, 3, 4, 5 and Table 9 shows the NER results using these clusters and the results using the nominal parameters.
Sample size In the results of embedding-based clustering, the clustering with 10,000 samples always outperforms the clustering with 1,000 samples, regardless of the number of clusters. Thus, the following compares clustering by the nominal parameters and by the embeddings with 10,000 samples. Overall, clustering by the nominal parameters achieved better scores than by embeddings, except in the case of 5 clusters. When the number of the clusters is 5, 11 languages achieved better scores in the nominal parameters while 13 languages did so in the embedding-based clustering. We think this difference is due to the biased distribution in Cluster \#1 of the embedding-based clustering (Table 8),
i.e., 18 languages out of 25 languages are clustered together, while the clusters obtained by the nominal

7While there are only 2 clusters available in the head parameter classification (i.e., either head-initial or head-final), we could test different numbers of clusters using the nominal parameters.
![6_image_0.png](6_image_0.png)
Table 7: Embedding-based clustering with different cluster numbers (using 1,000 samples).
The number of clusters

| # | 2 | 3 | 4 | 5 |
|---|-----------------|-----------------|-----------------|-----------------|
| 2 | es, pt, it, scn | es, pt, it, scn | es, pt, it, scn | es, pt, it, scn |
| 3 | - | el | el | el |
| 4 | - | - | bg | bg |
| 5 | - | - | - | ru |

![6_image_2.png](6_image_2.png)
![6_image_3.png](6_image_3.png)
![6_image_1.png](6_image_1.png)
parameters distribute relatively evenly (Cluster #1 {Germanic, Slavic}, #2 {Hellenic}, #3 {Romance}, #4 {Indo-Iranian}, #5 {Celtic}). Despite this difference in the training data, clustering by nominal parameters achieved comparable results.
**NER results with IE-all** We have also run the NER experiments using all the Indo-European languages (see IE-all in Tables 5 and 6). Since this setting contains the largest amount of training data in our experiments, its performance would be expected to be better than that of the other methods, whose clusters normally contain less training data. However, the nominal parameters showed comparable results, and the head parameter outperformed IE-all. Together with the comparison against the embedding-based method above, we argue that parameters from theoretical linguistics have the potential to mitigate the data sparsity problem present in multilingual NLP tasks.
**Methodological compatibility** Another point to note is that some languages seem to be more compatible with a particular method than others. For example, one of the low-resource languages, Pashto (ps), and some high-resource languages, such as Romanian (ro) and Danish (da), showed the best scores when using the clusters obtained by our parameter-based approach. On the other hand, Siciliano (scn) with the IE-all and relatively low-resource languages such as Marathi (mr) and Hindi (hi) with the embedding-based clustering demonstrated the best scores. These results indicate that different methods might capture different aspects of languages regardless of the amount of data, and that the linguistic properties effective for clustering may differ depending on the language.
## 5.2 Qualitative Analysis
This section provides some qualitative analysis based on the predictions obtained in the NER evaluations. We use the English prediction data from our results of the head-parameter clustering (Table 3) and the embedding-based clustering with 10,000 samples (Table 4). In the following examples, h indicates a prediction from the head-parameter clustering, which is correct, and e indicates a prediction from the embedding-based clustering, which is incorrect.
In (2h), the named entity representing an organization (ORG), "Allen Fieldhouse", appears after the preposition "at". It is clearly predictable to English speakers that words representing a location (LOC) or an ORG appear after "at", whereas words describing a person (PER) are less likely to. However, the type of entity was not correctly predicted with the embedding-based clustering (2e). The correct prediction in (2h) seems reasonable, given that identifying the head along with its complement could facilitate inferring the contexts in which a named entity occurs.
| lang | #train | noun (2 cl.) | #1000 (2 cl.) | #10000 (2 cl.) | noun (3 cl.) | #1000 (3 cl.) | #10000 (3 cl.) | noun (4 cl.) | #1000 (4 cl.) | #10000 (4 cl.) | noun (5 cl.) | #1000 (5 cl.) | #10000 (5 cl.) |
|------|--------|--------------|---------------|----------------|--------------|---------------|----------------|--------------|---------------|----------------|--------------|---------------|----------------|
| cy | 10,000 | 91.57 | 92.22 | 91.88 | 91.57 | 91.73 | 92.42 | 91.57 | 91.73 | 91.98 | 91.57 | 91.27 | 92.64 |
| ga | 1,000 | 85.72 | 84.11 | 84.38 | 85.72 | 84.11 | 84.43 | 85.72 | 84.11 | 84.53 | 85.72 | 84.11 | 85.13 |
| ps | 100 | 53.97 | 55.31 | 55.02 | 55.92 | 54.68 | 53.32 | 55.92 | 54.68 | 55.37 | 55.92 | 52.97 | 53.54 |
| mr | 5,000 | 88.34 | 88.71 | 88.29 | 86.96 | 87.93 | 88.58 | 86.96 | 87.38 | 88.09 | 86.96 | 87.38 | 88.13 |
| hi | 5,000 | 90.09 | 89.48 | 89.42 | 86.89 | 89.18 | 89.90 | 86.89 | 88.66 | 89.70 | 86.89 | 88.66 | 88.98 |
| ro | 20,000 | 94.32 | 94.04 | 94.17 | 94.32 | 94.04 | 93.98 | 93.69 | 94.04 | 94.02 | 93.69 | 94.04 | 94.06 |
| fr | 20,000 | 91.01 | 90.74 | 90.56 | 91.04 | 90.74 | 90.53 | 90.39 | 90.74 | 90.52 | 90.39 | 90.74 | 90.32 |
| es | 20,000 | 91.38 | 91.34 | 90.52 | 91.51 | 91.34 | 90.52 | 90.96 | 91.34 | 90.52 | 90.96 | 91.34 | 90.52 |
| pt | 20,000 | 92.14 | 91.79 | 91.43 | 92.11 | 91.79 | 91.43 | 91.57 | 91.79 | 91.43 | 91.57 | 91.79 | 91.43 |
| it | 20,000 | 92.16 | 91.93 | 91.52 | 92.22 | 91.93 | 91.52 | 91.54 | 91.93 | 91.52 | 91.54 | 91.93 | 91.52 |
| scn | 100 | 76.54 | 75.58 | 77.12 | 80.08 | 75.58 | 77.12 | 76.77 | 75.58 | 77.12 | 76.77 | 75.58 | 77.12 |
| el | 20,000 | 91.18 | 90.91 | 91.28 | 91.21 | 90.40 | 90.07 | 91.18 | 90.40 | 90.07 | 90.07 | 90.07 | 90.07 |
| bg | 20,000 | 93.44 | 93.22 | 93.34 | 93.25 | 92.64 | 93.34 | 93.18 | 92.64 | 92.48 | 93.19 | 92.58 | 92.48 |
| pl | 20,000 | 91.45 | 91.12 | 91.33 | 91.34 | 91.12 | 91.22 | 91.19 | 91.12 | 91.23 | 91.18 | 91.12 | 91.24 |
| ru | 20,000 | 90.01 | 89.77 | 89.98 | 89.96 | 89.32 | 90.02 | 89.97 | 89.18 | 89.66 | 89.81 | 89.18 | 88.52 |
| sl | 15,000 | 93.79 | 93.65 | 93.95 | 93.89 | 93.65 | 93.88 | 93.93 | 93.65 | 93.61 | 93.78 | 93.65 | 93.81 |
| hr | 20,000 | 92.12 | 91.88 | 92.07 | 92.05 | 91.88 | 92.06 | 91.91 | 91.88 | 92.14 | 91.97 | 91.88 | 91.91 |
| af | 5,000 | 91.16 | 91.30 | 91.73 | 91.19 | 91.51 | 90.75 | 91.46 | 90.73 | 91.18 | 91.37 | 90.73 | 91.14 |
| nl | 20,000 | 92.62 | 91.90 | 92.23 | 92.59 | 91.74 | 92.17 | 92.26 | 90.86 | 92.14 | 92.14 | 90.86 | 92.20 |
| de | 20,000 | 88.51 | 88.13 | 88.61 | 88.59 | 88.13 | 88.31 | 88.25 | 88.13 | 88.33 | 88.25 | 88.13 | 88.38 |
| is | 1,000 | 87.65 | 87.28 | 87.63 | 87.54 | 86.75 | 87.44 | 87.92 | 86.51 | 87.77 | 87.51 | 86.51 | 87.71 |
| en | 20,000 | 84.11 | 84.22 | 84.02 | 84.12 | 84.22 | 83.97 | 83.75 | 84.22 | 83.89 | 83.83 | 84.22 | 83.89 |
| da | 20,000 | 93.10 | 92.76 | 92.91 | 93.15 | 92.59 | 93.03 | 93.00 | 92.43 | 92.78 | 92.92 | 92.43 | 92.99 |
| no | 20,000 | 93.48 | 93.05 | 93.34 | 93.32 | 93.14 | 93.24 | 93.31 | 92.79 | 93.27 | 93.24 | 92.79 | 93.17 |
| fo | 100 | 87.01 | 87.58 | 88.70 | 86.61 | 86.35 | 87.44 | 88.70 | 87.72 | 87.76 | 86.78 | 87.72 | 88.33 |
(2) h. His 46 points tied the record for most points scored by an opponent at **Allen Fieldhouse** (ORG).

e. ... an opponent at **Allen Fieldhouse** (PER).
In (3e), a named entity consisting of three words, "Arlington National Cemetery", was wrongly predicted to be split into ORG and LOC. This indicates that the named entity is not correctly identified as the complement of "in". Given this, we conjecture that clustering by the head parameter can be helpful in correctly predicting the position of the head in the phrase. Specifically, learning from sequences of a P-head followed by its NP complement may have facilitated identifying the span of the named entity.
(3) h. He died in 1887 and was buried in **Arlington National Cemetery** (ORG).

e. ... in **Arlington** (ORG) **National Cemetery** (LOC).
## 5.3 Annotation Errors In The Wikiann Dataset
When examining the incorrect predictions in the English data, we found that the WikiAnn dataset contains some non-negligible annotation errors. From our sampling-based examination, we estimate that approximately 1% of the annotations in the WikiAnn dataset could be erroneous. Examples of the annotation errors found in the WikiAnn dataset are shown in (4) and (5). In (4), *Cleveland, Ohio* is not an organization name. In (5), although *Sanremo* is a named entity indicating a location, the unnecessary brackets "[[" could have caused an error in its annotation.
(4) He was born in **Cleveland , Ohio** (ORG).

(5) Washhouse in [[Sanremo, **Italy** (LOC), ...
Since the annotations of the WikiAnn dataset were machine-generated, some errors could have occurred in the generation process. However, these annotation errors need to be revised to improve the reliability of NER evaluations.
## 6 Conclusion
We have proposed two language clustering methods based on the morpho-syntactic parameters proposed in theoretical linguistics. We showed that these clustering methods outperformed the embedding-based clustering in multilingual NER
with Indo-European languages. We have also compared against the model using all the Indo-European languages as the training data. Despite the large difference in data size, our approach outperformed this model as well. These results suggest that parameters in theoretical linguistics have potential utility in multilingual NLP tasks and that this direction is worth exploring.
Future work will extend this approach to other language families as well as different multilingual tasks, such as machine translation. Another direction would be to probe the clusters derived from the embedding-based method to explore features that might not have been captured by our approach or any approaches that make use of explicit linguistic features.
## Limitations
The morpho-syntactic parameters used in this study are just a fraction of the many linguistic parameters that have been proposed in theoretical syntax (e.g., Roberts 2019). The optimal set of language parameters for language clustering may vary depending on the target task. It remains to be seen whether and how various parameters in theoretical linguistics could improve different NLP tasks. For example, cross-lingual transfer learning may be performed more effectively by carefully tailoring the linguistic parameters to a particular task, as we have done for NER.

Related to the above point, one limitation of our approach is that some languages have not yet been investigated well in theoretical linguistics, particularly some under-documented or endangered languages. Even for well-documented languages, some parameters remain controversial, such as the so-called NP/DP parameter (e.g., Bošković 2012). Thus, our approach proceeds in tandem with the advancement of theoretical linguistics.
## Ethics Statement
We used a freely available dataset and a pre-trained model from the Hugging Face Hub for our experiments. We selected a pre-trained model with an appropriate size (XLM-RoBERTa-base) given our purpose of use. We needed to perform many rounds of clustering and fine-tuning for the pre-trained model. Therefore, we set preliminary experiments beforehand with a smaller sample size for each step to ensure that the experiments could be performed effectively.
## Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number JP21H04901.
## References
David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen H. Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Aremu Anuoluwapo, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Rabiu Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021. MasakhaNER: Named entity recognition for African languages. Transactions of the Association for Computational Linguistics, 9:1116–1131.
Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444.
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *Transactions* of the Association for Computational Linguistics, 7:597–610.
Željko Bošković. 2012. *On NPs and Clauses*, pages 179–246. De Gruyter Mouton, Berlin, Boston.
Andrea Ceolin, Cristina Guardiano, Giuseppe Longobardi, Monica Alexandrina Irimia, Luca Bortolussi, and Andrea Sgarro. 2021. At the boundaries of syntactic prehistory. *Philosophical Transactions of the* Royal Society B, 376.
Noam Chomsky. 1981. Lectures on Government and Binding. De Gruyter, Berlin, Germany.
Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2022. Language-family adapters for multilingual neural machine translation. *ArXiv*,
abs/2209.15236.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Wietse de Vries, Martijn Wieling, and Malvina Nissim.
2022. Make the best of cross-lingual transfer: Evidence from POS tagging with over 100 languages.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7676–7685, Dublin, Ireland.
Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Tejas Dhamecha, Rudra Murthy, Samarth Bharadwaj, Karthik Sankaranarayanan, and Pushpak Bhattacharyya. 2021. Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A
Case Study in Indo-Aryan Languages. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 8584–8595, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Matthew S. Dryer and Martin Haspelmath, editors. 2013.
WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Yoshinari Fujinuma, Jordan Boyd-Graber, and Katharina Kann. 2022. Match the script, adapt if multilingual: Analyzing the effect of multilingual pretraining on cross-lingual transferability. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1500–1512, Dublin, Ireland. Association for Computational Linguistics.
Daniela Gerz, Ivan Vulić, Edoardo Maria Ponti, Roi
Reichart, and Anna Korhonen. 2018. On the relation between linguistic typology and (limitations of)
multilingual language modeling. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 316–327, Brussels, Belgium. Association for Computational Linguistics.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019.
Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics.
Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Re-
ichart, and Anna Korhonen. 2016. Survey on the use of typological information in natural language processing. In *Proceedings of COLING 2016, the* 26th International Conference on Computational Linguistics: Technical Papers, pages 1297–1308, Osaka, Japan. The COLING 2016 Organizing Committee.
Arturo Oncevay, Barry Haddow, and Alexandra Birch.
2020. Bridging linguistic typology and multilingual machine translation with multi-view language representations. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 2391–2406, Online. Association for Computational Linguistics.
Vaidehi Patil, Partha Talukdar, and Sunita Sarawagi.
2022. Overlap-based vocabulary generation improves cross-lingual transfer among related languages. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 219–233, Dublin, Ireland. Association for Computational Linguistics.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekate-
rina Shutova, and Anna Korhonen. 2019. Modeling language variation and universals: A survey on typological linguistics for natural language processing.
Computational Linguistics, 45(3):559–601.
Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics.
Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining with artificial language: Studying transferable knowledge in language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7302–
7315, Dublin, Ireland. Association for Computational Linguistics.
Ian Roberts. 2019. *Parameter Hierarchies and Universal Grammar*. Oxford University Press.
Fahimeh Saleh, Wray Buntine, Gholamreza Haffari, and Lan Du. 2021. Multilingual neural machine translation: Can linguistic hierarchies help? In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1313–1330, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Kyle Shaffer. 2021. Language clustering for multilingual named entity recognition. In *Findings of the* Association for Computational Linguistics: EMNLP
2021, pages 40–45, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 963–973, Hong Kong, China. Association for Computational Linguistics.
Simone Tedeschi, Valentino Maiorca, Niccolò Campolungo, Francesco Cecconi, and Roberto Navigli. 2021.
WikiNEuRal: Combined neural and knowledgebased silver data creation for multilingual NER. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2521–2533, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Robert L. Thorndike. 1953. Who belongs in the family?
Psychometrika, 18:267–276.
## A Appendix
The summary of the languages used in our experiments is shown in Table 10.
Table 11 shows the NER evaluations of head parameter-based clustering with standard deviation scores in parentheses.
Tables 12 and 13 represent the NER evaluations when we set the number of clusters to {2, 3} and
{4, 5}, respectively, with standard deviations in parentheses (see Section 5.1 for the details).
| ISO 639-1 Code | Language | Sub-family |
|------------------|----------------|--------------|
| cy | Welsh | Celtic |
| ga | Irish | |
| ps | Pashto | Indo-Iranian |
| mr | Marathi | |
| hi | Hindi | |
| ro | Romanian | |
| fr | French | |
| es | Spanish | Romance |
| pt | Portuguese | |
| it | Italian | |
| scn | Siciliano | |
| el | Greek | Hellenic |
| bg | Bulgarian | |
| pl | Polish | |
| ru | Russian | Slavic |
| sl | Slovenian | |
| hr | Serbo-Croatian | |
| af | Afrikaans | |
| nl | Dutch | |
| de | German | |
| is | Icelandic | Germanic |
| en | English | |
| da | Danish | |
| no | Norwegian | |
| fo | Faroese | |
| lang | #train | mono | head | #1000 (2 cl.) | #10000 (2 cl.) | family |
|------|--------|------|------|---------------|----------------|--------|
| cy | 10,000 | 91.09 (0.30) | 93.15 (0.03) | 92.22 (0.37) | 91.88 (0.37) | 92.95 (0.45) |
| ga | 1,000 | 76.51 (1.18) | 85.37 (0.54) | 84.11 (0.61) | 84.38 (0.21) | 84.90 (0.45) |
| ps | 100 | 0.00 (0.00) | 55.92 (2.84) | 55.31 (1.40) | 55.02 (0.76) | 52.22 (1.31) |
| mr | 5,000 | 85.5 (0.03) | 86.96 (0.39) | 88.71 (0.66) | 88.29 (0.40) | 88.34 (0.37) |
| hi | 5,000 | 86.06 (0.54) | 86.89 (0.30) | 89.48 (0.42) | 89.42 (0.80) | 89.47 (0.45) |
| ro | 20,000 | 92.64 (0.11) | 94.43 (0.27) | 94.04 (0.12) | 94.17 (0.11) | 94.18 (0.03) |
| fr | 20,000 | 88.99 (0.14) | 91.10 (0.09) | 90.74 (0.09) | 90.56 (0.13) | 91.05 (0.15) |
| es | 20,000 | 89.19 (0.12) | 91.66 (0.31) | 91.34 (0.10) | 90.52 (0.19) | 91.63 (0.02) |
| pt | 20,000 | 90.24 (0.06) | 92.00 (0.22) | 91.79 (0.07) | 91.43 (0.06) | 92.15 (0.06) |
| it | 20,000 | 90.79 (0.21) | 92.03 (0.12) | 91.93 (0.11) | 91.52 (0.07) | 92.06 (0.10) |
| scn | 100 | 1.18 (1.67) | 77.04 (1.46) | 75.58 (1.20) | 77.12 (1.63) | 81.04 (2.88) |
| el | 20,000 | 90.07 (0.15) | 91.49 (0.05) | 90.91 (0.08) | 91.28 (0.17) | 91.04 (0.09) |
| bg | 20,000 | 92.48 (0.07) | 93.60 (0.17) | 93.22 (0.11) | 93.34 (0.03) | 93.42 (0.11) |
| pl | 20,000 | 89.86 (0.08) | 91.38 (0.11) | 91.12 (0.04) | 91.33 (0.10) | 91.43 (0.17) |
| ru | 20,000 | 88.52 (0.14) | 89.86 (0.16) | 89.77 (0.07) | 89.98 (0.12) | 89.88 (0.02) |
| sl | 15,000 | 93.02 (0.04) | 93.97 (0.27) | 93.65 (0.19) | 93.95 (0.10) | 93.86 (0.16) |
| hr | 20,000 | 90.90 (0.22) | 92.27 (0.03) | 91.88 (0.12) | 92.07 (0.13) | 92.02 (0.04) |
| af | 5,000 | 89.06 (0.09) | 91.70 (0.31) | 91.30 (0.57) | 91.73 (0.31) | 91.80 (0.20) |
| nl | 20,000 | 90.64 (0.15) | 92.56 (0.23) | 91.90 (0.10) | 92.23 (0.07) | 92.49 (0.07) |
| de | 20,000 | 87.47 (0.10) | 89.06 (0.32) | 88.13 (0.05) | 88.61 (0.06) | 88.70 (0.01) |
| is | 1,000 | 73.98 (2.36) | 88.04 (0.40) | 87.28 (0.39) | 87.63 (0.56) | 88.29 (0.77) |
| en | 20,000 | 82.27 (0.14) | 84.37 (0.15) | 84.22 (0.23) | 84.02 (0.06) | 84.01 (0.09) |
| da | 20,000 | 91.73 (0.11) | 93.39 (0.27) | 92.76 (0.09) | 92.91 (0.15) | 93.04 (0.03) |
| no | 20,000 | 91.98 (0.13) | 93.46 (0.16) | 93.05 (0.07) | 93.34 (0.20) | 93.49 (0.09) |
| fo | 100 | 0.00 (0.00) | 88.21 (1.52) | 87.58 (1.09) | 88.70 (1.22) | 87.69 (0.75) |
| lang | #train | noun (2 cl.) | #1000 (2 cl.) | #10000 (2 cl.) | noun (3 cl.) | #1000 (3 cl.) | #10000 (3 cl.) |
|------|--------|--------------|---------------|----------------|--------------|---------------|----------------|
| cy | 10,000 | 91.57 (0.12) 92.22 (0.37) | 91.88 (0.37) | 91.57 (0.12) | 91.73 (0.03) | 92.42 (0.59) | |
| ga | 1,000 | 85.72 (0.04) 84.11 (0.61) | 84.38 (0.21) | 85.72 (0.04) | 84.11 (0.61) | 84.43 (0.60) | |
| ps | 100 | 53.97 (3.36) 55.31 (1.40) | 55.02 (0.76) | 55.92 (2.84) | 54.68 (1.07) | 53.32 (2.06) | |
| mr | 5,000 | 88.34 (0.35) 88.71 (0.66) | 88.29 (0.40) | 86.96 (0.39) | 87.93 (0.31) | 88.58 (0.44) | |
| hi | 5,000 | 90.09 (0.29) 89.48 (0.42) | 89.42 (0.80) | 86.89 (0.30) | 89.18 (0.54) | 89.90 (0.29) | |
| ro | 20,000 | 94.32 (0.10) 94.04 (0.12) | 94.17 (0.11) | 94.32 (0.05) | 94.04 (0.12) | 93.98 (0.12) | |
| fr | 20,000 | 91.01 (0.04) 90.74 (0.09) | 90.56 (0.13) | 91.04 (0.03) | 90.74 (0.09) | 90.53 (0.02) | |
| es | 20,000 | 91.38 (0.18) 91.34 (0.10) | 90.52 (0.19) | 91.51 (0.08) | 91.34 (0.10) | 90.52 (0.19) | |
| pt | 20,000 | 92.14 (0.12) 91.79 (0.07) | 91.43 (0.06) | 92.11 (0.10) | 91.79 (0.07) | 91.43 (0.06) | |
| it | 20,000 | 92.16 (0.15) 91.93 (0.11) | 91.52 (0.07) | 92.22 (0.12) | 91.93 (0.11) | 91.52 (0.07) | |
| scn | 100 | 76.54 (0.92) 75.58 (1.20) | 77.12 (1.63) | 80.08 (2.69) | 75.58 (1.20) | 77.12 (1.63) | |
| el | 20,000 | 91.18 (0.21) 90.91 (0.08) | 91.28 (0.17) | 91.21 (0.01) | 90.40 (0.11) | 90.07 (0.15) | |
| bg | 20,000 | 93.44 (0.07) 93.22 (0.11) | 93.34 (0.03) | 93.25 (0.02) | 92.64 (0.07) | 93.34 (0.15) | |
| pl | 20,000 | 91.45 (0.09) 91.12 (0.04) | 91.33 (0.10) | 91.34 (0.02) | 91.12 (0.04) | 91.22 (0.05) | |
| ru | 20,000 | 90.01 (0.08) 89.77 (0.07) | 89.98 (0.12) | 89.96 (0.18) | 89.32 (0.06) | 90.02 (0.04) | |
| sl | 15,000 | 93.79 (0.10) 93.65 (0.19) | 93.95 (0.10) | 93.89 (0.22) | 93.65 (0.19) | 93.88 (0.09) | |
| hr | 20,000 | 92.12 (0.11) 91.88 (0.12) | 92.07 (0.13) | 92.05 (0.07) | 91.88 (0.12) | 92.06 (0.11) | |
| af | 5,000 | 91.16 (0.16) 91.30 (0.57) | 91.73 (0.31) | 91.19 (0.37) | 91.51 (0.40) | 90.75 (0.17) | |
| nl | 20,000 | 92.62 (0.02) 91.90 (0.10) | 92.23 (0.07) | 92.59 (0.17) | 91.74 (0.12) | 92.17 (0.16) | |
| de | 20,000 | 88.51 (0.04) 88.13 (0.05) | 88.61 (0.06) | 88.59 (0.13) | 88.13 (0.05) | 88.31 (0.13) | |
| is | 1,000 | 87.65 (0.23) 87.28 (0.39) | 87.63 (0.56) | 87.54 (0.24) | 86.75 (0.39) | 87.44 (0.16) | |
| en | 20,000 | 84.11 (0.29) 84.22 (0.23) | 84.02 (0.06) | 84.12 (0.09) | 84.22 (0.23) | 83.97 (0.05) | |
| da | 20,000 | 93.10 (0.11) 92.76 (0.09) | 92.91 (0.15) | 93.15 (0.18) | 92.59 (0.15) | 93.03 (0.10) | |
| no | 20,000 | 93.48 (0.06) 93.05 (0.07) | 93.34 (0.20) | 93.32 (0.13) | 93.14 (0.02) | 93.24 (0.06) | |
| fo | 100 | 87.01 (0.90) 87.58 (1.09) | 88.70 (1.22) | 86.61 (0.59) | 86.35 (1.26) | 87.44 (0.66) | |
| lang | #train | noun (4 cl.) | #1000 (4 cl.) | #10000 (4 cl.) | noun (5 cl.) | #1000 (5 cl.) | #10000 (5 cl.) |
|------|--------|--------------|---------------|----------------|--------------|---------------|----------------|
| cy | 10,000 | 91.57 (0.12) 91.73 (0.03) | 91.98 (0.42) | 91.57 (0.12) | 91.27 (0.34) | 92.64 (0.13) | |
| ga | 1,000 | 85.72 (0.04) 84.11 (0.61) | 84.53 (0.27) | 85.72 (0.04) | 84.11 (0.61) | 85.13 (0.81) | |
| ps | 100 | 55.92 (2.84) 54.68 (1.07) | 55.37 (0.69) | 55.92 (2.84) | 52.97 (2.53) | 53.54 (2.79) | |
| mr | 5,000 | 86.96 (0.39) 87.38 (0.86) | 88.09 (0.19) | 86.96 (0.39) | 87.38 (0.86) | 88.13 (0.52) | |
| hi | 5,000 | 86.89 (0.30) 88.66 (0.37) | 89.70 (0.09) | 86.89 (0.30) | 88.66 (0.37) | 88.98 (0.38) | |
| ro | 20,000 | 93.69 (0.04) 94.04 (0.12) | 94.02 (0.08) | 93.69 (0.04) | 94.04 (0.12) | 94.06 (0.13) | |
| fr | 20,000 | 90.39 (0.03) 90.74 (0.09) | 90.52 (0.21) | 90.39 (0.03) | 90.74 (0.09) | 90.32 (0.14) | |
| es | 20,000 | 90.96 (0.13) 91.34 (0.10) | 90.52 (0.19) | 90.96 (0.13) | 91.34 (0.10) | 90.52 (0.19) | |
| pt | 20,000 | 91.57 (0.06) 91.79 (0.07) | 91.43 (0.06) | 91.57 (0.06) | 91.79 (0.07) | 91.43 (0.06) | |
| it | 20,000 | 91.54 (0.06) 91.93 (0.11) | 91.52 (0.07) | 91.54 (0.06) | 91.93 (0.11) | 91.52 (0.07) | |
| scn | 100 | 76.77 (1.32) 75.58 (1.20) | 77.12 (1.63) | 76.77 (1.32) | 75.58 (1.20) | 77.12 (1.63) | |
| el | 20,000 | 91.18 (0.13) | 90.4 (0.11) | 90.07 (0.15) | 90.07 (0.15) | 90.07 (0.15) | 90.07 (0.15) |
| bg | 20,000 | 93.18 (0.10) 92.64 (0.07) | 92.48 (0.07) | 93.19 (0.10) | 92.58 (0.03) | 92.48 (0.07) | |
| pl | 20,000 | 91.19 (0.02) 91.12 (0.04) | 91.23 (0.09) | 91.18 (0.10) | 91.12 (0.04) | 91.24 (0.05) | |
| ru | 20,000 | 89.97 (0.15) 89.18 (0.18) | 89.66 (0.02) | 89.81 (0.20) | 89.18 (0.18) | 88.52 (0.14) | |
| sl | 15,000 | 93.93 (0.18) 93.65 (0.19) | 93.61 (0.02) | 93.78 (0.06) | 93.65 (0.19) | 93.81 (0.06) | |
| hr | 20,000 | 91.91 (0.06) 91.88 (0.12) | 92.14 (0.10) | 91.97 (0.09) | 91.88 (0.12) | 91.91 (0.17) | |
| af | 5,000 | 91.46 (0.70) 90.73 (0.05) | 91.18 (0.12) | 91.37 (0.31) | 90.73 (0.05) | 91.14 (0.34) | |
| nl | 20,000 | 92.26 (0.11) 90.86 (0.17) | 92.14 (0.04) | 92.14 (0.14) | 90.86 (0.17) | 92.20 (0.15) | |
| de | 20,000 | 88.25 (0.09) 88.13 (0.05) | 88.33 (0.07) | 88.25 (0.21) | 88.13 (0.05) | 88.38 (0.06) | |
| is | 1,000 | 87.92 (0.83) 86.51 (0.09) | 87.77 (0.40) | 87.51 (0.37) | 86.51 (0.09) | 87.71 (0.25) | |
| en | 20,000 | 83.75 (0.19) 84.22 (0.23) | 83.89 (0.14) | 83.83 (0.03) | 84.22 (0.23) | 83.89 (0.03) | |
| da | 20,000 | 93.00 (0.05) 92.43 (0.08) | 92.78 (0.09) | 92.92 (0.10) | 92.43 (0.08) | 92.99 (0.04) | |
| no | 20,000 | 93.31 (0.11) 92.79 (0.00) | 93.27 (0.06) | 93.24 (0.07) | 92.79 (0.00) | 93.17 (0.13) | |
| fo | 100 | 88.70 (1.58) 87.72 (0.82) | 87.76 (1.06) | 86.78 (2.33) | 87.72 (0.82) | 88.33 (0.28) | |
|
skerath-etal-2023-native | Native Language Prediction from Gaze: a Reproducibility Study | https://aclanthology.org/2023.acl-srw.26 | Numerous studies found that the linguistic properties of a person{'}s native language affect the cognitive processing of other languages. However, only one study has shown that it was possible to identify the native language based on eye-tracking records of natural L2 reading using machine learning. A new corpus allows us to replicate these results on a more interrelated and larger set of native languages. Our results show that comparable classification performance is maintained despite using less data. However, analysis shows that the correlation between L2 eye movements and native language similarity may be more complex than the original study found. | # Native Language Prediction From Gaze: A Reproducibility Study
Lina Skerath, Paulina Toborek, Anita Zielińska, Maria Barrett
IT University of Copenhagen
[email protected]

Rob van der Goot
IT University of Copenhagen
[email protected]
## Abstract
Numerous studies found that the linguistic properties of a person's native language affect the cognitive processing of other languages.
However, only one study has shown that it was possible to identify the native language based on eye-tracking records of natural L2 reading using machine learning. A new corpus allows us to replicate these results on a more interrelated and larger set of native languages. Our results show that comparable classification performance is maintained despite using less data.
However, analysis shows that the correlation between L2 eye movements and native language similarity may be more complex than the original study found.
## 1 Introduction
Research has shown that a speaker's native language can affect their learning and performance in a foreign language (Berkes and Flynn, 2012; Alonso, 2016; Cop et al., 2017). The eye movements of a reader, namely fixations and saccades, are a window into the online cognitive processing of text with millisecond accuracy (Rayner, 1998). Native speakers of different languages may exhibit different eye-movement patterns when reading a foreign language: readers make shorter and more frequent fixations in their native language, and longer fixations in other languages due to the increased cognitive load (Hopp, 2010; Rayner et al., 2012; Berzak et al., 2022).

Several researchers have examined eye-movement patterns across different nationalities, exploring various aspects such as sentence reading times, fixation count, and saccade duration (Cop et al., 2015). Roberts and Siyanova-Chanturia
(2013) showed that gaze data could be used for examining, e.g., reading processes, second language acquisition, and discourse processing, as well as give relevant insights into the fields of second language acquisition and processing. Early research in Native Language Identification (Tsur and Rappoport, 2007) focused on the relationship between a person's native language and their writing in a second language, while Berzak et al.
(2017) for the first time predicted a reader's native language using machine learning across four languages (Chinese, Japanese, Portuguese, and Spanish) using only eye-tracking features from natural reading in their second language (L2), English. The study leveraged the knowledge that different languages have unique features, such as word order, grammatical rules, and phonological features, that affect language processing in other languages.
Despite a general interest in eye-tracking corpora for L2 reading, e.g., (Cop et al., 2017), until recently, there has not been a publicly available dataset with enough languages to reproduce the results of Berzak et al. (2017). Berzak et al.
(2017) used a subset of the licensed CELER dataset
(Berzak et al., 2022), which is the largest eye-tracking corpus by the number of L2 readers, encompassing five different native language backgrounds. The Multilingual Eye-movement COrpus (MECO) L2 dataset (Kuperman et al., 2022)1 comprises English L2 reading by speakers of 12 different language backgrounds and allows replication of the findings by Berzak et al. (2017) on a different and larger set of languages, which is why we employ the MECO dataset for this study.
In this study, we replicate the study by Berzak et al. (2017) and classify the native language of the reader from eye-tracking records of their natural reading of English, using another corpus.2 We include readers from seven different language backgrounds that are more interrelated than in the original study; the
| LANGUAGE | ISO | n PARTICIPANTS |
|------------|-------|------------------|
| Estonian | et | 23 |
| English | en | 21 |
| Finnish | fi | 23 |
| German | de | 23 |
| Hebrew | he | 18 |
| Italian | it | 20 |
| Spanish | es | 21 |
Table 1: Number of participants by native language and language ISO code in the data set.
linguistic similarity of the languages used in this study is in the range of 0.64–0.893. The original study did not explore languages in this range but only less similar languages (linguistic similarity
<.5) plus one very similar language pair (linguistic similarity >.95).
## 2 Data
The MECO data was collected in 12 eye-tracking laboratories around the world. Participants were young adults ranging from 18 to 39 years old with high levels of L2 proficiency, which was ensured through English instruction in higher education.
For more comprehensive information about the dataset, we refer to the authors' paper (Kuperman et al., 2022).
The MECO data set includes eye-tracking input gathered from native speakers of 12 languages recorded during reading an English encyclopedic text. Due to an insufficient number of participants in some of the cohorts, we used the subset of seven languages with the most participants. To avoid overfitting, we randomly undersampled 23 participants for the two largest cohorts, equivalent in size to the third largest group within the dataset as shown in Table 1. Berzak et al. (2017) used 36 to 37 readers for each language.
We only use the texts read by all the participants (also named "shared regime" in Berzak et al.
(2017)). The total amount of words read per participant is 595 words, while the original study used 900 words. The feature set employed comprises three word-based measurements: First Fixation duration (FF), First Pass duration (FP) which is the sum of all fixations during the first pass reading of the word, and Total fixation duration (TF).
## 3 Methods
In this section, we describe the methods employed to replicate Berzak et al. (2017), giving a detailed description of the steps deviating from the setup of the original study.
## 3.1 Features
All data gaps encountered in the MECO dataset correspond to words marked as skipped by participants during reading, so it is legitimate to replace such missing values with zeros. Additionally, following the approach of the original research, we normalize all fixation times by the reading time of the entire sentence. The final data set consists of three fixation-measure columns per word or cluster, where each row represents the data collected from one person.
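As a concrete illustration of these pre-processing steps, the minimal sketch below fills skipped words with zeros and normalises each fixation measure by the reading time of the sentence. The column names (`participant`, `sent_id`, `FF`, `FP`, `TF`) and the use of summed total fixation time as the sentence reading time are assumptions made for this example and may differ from the actual MECO field names.

```python
import pandas as pd

# Toy word-level eye-tracking records: one row per (participant, word).
df = pd.DataFrame({
    "participant": [1, 1, 1, 1],
    "sent_id":     [1, 1, 1, 1],
    "word":        ["The", "quick", "brown", "fox"],
    "FF": [210, None, 180, 250],   # None = the word was skipped
    "FP": [210, None, 230, 300],
    "TF": [350, None, 230, 420],
})

measures = ["FF", "FP", "TF"]
df[measures] = df[measures].fillna(0)   # skipped words contribute zero fixation time

# Normalise each measure by the reading time of the whole sentence
# (approximated here as the summed total fixation duration).
sentence_time = df.groupby(["participant", "sent_id"])["TF"].transform("sum")
df[measures] = df[measures].div(sentence_time, axis=0)
print(df)
```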
Words in Fixed Context (WFC) The WFC feature set considers the fixation times for specific words, and no aggregation is performed on the unigram level. The bigram and trigram fixation times are then obtained by simply summing the values of the unigrams that are part of the interest area. The columns of the dataset consist of the 3 features for every n-gram in the corpus, giving 5,364 features in total.
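A minimal sketch of how the bigram and trigram fixation values can be derived by summing the word-level values for one reader; the function name and the assumption that n-grams are formed over the full word sequence (ignoring sentence boundaries) are illustrative.

```python
import numpy as np

def wfc_features(word_times: np.ndarray, max_n: int = 3) -> np.ndarray:
    """word_times: shape (n_words, 3) with the FF, FP and TF value of each word,
    in reading order. Returns concatenated uni-, bi- and tri-gram features."""
    feats = [word_times]                       # unigrams: the word values themselves
    for n in range(2, max_n + 1):
        # element-wise sum over the n consecutive words that form each n-gram
        ngram = sum(word_times[i : len(word_times) - n + 1 + i] for i in range(n))
        feats.append(ngram)
    return np.concatenate([f.ravel() for f in feats])

times = np.random.RandomState(0).rand(595, 3)  # 595 shared-regime words
print(wfc_features(times).shape)               # (595 + 594 + 593) * 3 values
```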
Syntactic Clusters (SC) In Berzak et al. (2017),
syntactic features were obtained from the original Penn Treebank. As no manually annotated syntactic features are available for our data, we use predicted syntactic information instead (described in detail in Appendix B). Following Berzak et al. (2017), we use the average FF, FP and TF over n-grams (n=1-3) of the UPOS labels, PTB POS tags, and UD dependency labels as features. For example, the average fixation time of a participant on the UPOS sequence ADV ADJ is a single feature.
Information Clusters (IC) In addition to grouping the features by syntactic labels, average fixation times were calculated for clusters defined by the length of the words, measured as the number of characters. For bi- and trigrams, the word lengths were summed, and clusters were created based on this sum.
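Both cluster-based feature sets reduce to a group-by-and-average over the word-level measures. The sketch below shows the unigram case for one reader, with illustrative column names; the `upos` and `length` columns would come from the predicted annotations and character counts.

```python
import pandas as pd

words = pd.DataFrame({
    "upos":   ["DET", "ADJ", "NOUN", "ADV", "ADJ"],
    "length": [3, 5, 5, 7, 4],
    "FF": [0.10, 0.20, 0.30, 0.25, 0.15],
    "FP": [0.10, 0.25, 0.35, 0.30, 0.20],
    "TF": [0.15, 0.30, 0.40, 0.35, 0.20],
})

# Syntactic Clusters: mean FF/FP/TF per UPOS label (unigram case).
sc = words.groupby("upos")[["FF", "FP", "TF"]].mean()

# Information Clusters: mean FF/FP/TF per word length in characters.
ic = words.groupby("length")[["FF", "FP", "TF"]].mean()

# Flatten into one feature vector per reader, e.g. ("ADJ", "FF") -> value.
features = pd.concat([sc.stack(), ic.stack()])
print(features)
```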
## 3.2 Model
For interpretation, we compare to a majority class baseline. Following the original paper, we use a loglinear model to obtain the Native Language Identification from Reading (NLIR) performance as well as the model-based language similarity (3.3.2). We implement the model using scikit-learn (Pedregosa
| Shared regime | unigrams | +bigrams | +trigrams |
|-----------------|----------|-----------|-----------|
| Majority Class | 15.44 | | |
| IC | 47.52 | 48.19 | 48.86 |
| SC | 57.62 | 73.29 | 76.57 |
| SC+IC | 52.29 | 73.29 | 77.95 |
| WFC | 81.29 | 79.29 | 77.95 |
et al., 2011) and use the 'lbfgs' solver in accordance with the original paper. A reader's native language encoded as a categorical variable is used as the model's target variable. We report our results based on 10-fold cross-validation. To preserve a similar distribution of languages in train and test data, we employ a stratified K-Folds split. We train the same model on the three feature sets described in the previous section and an additional combination of SC and IC feature sets.
To ensure comparability with the original paper despite the different number of languages, we analyze model performance with different numbers of languages: we train the model on each possible combination of languages and group the combinations by the number of languages. We take the mean accuracy score for each group size and plot the results (Figure 1). We note that our classes are slightly imbalanced, so F1 could arguably be a better metric, but to compare with previous work, and because the classes are almost balanced, we choose to use accuracy.
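The corresponding scikit-learn setup can be sketched as follows; the feature matrix is a placeholder, and the labels are arranged only so that every class has enough members for 10-fold stratification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.RandomState(0).rand(149, 300)   # 149 participants, placeholder features
y = np.arange(149) % 7                        # 7 native-language classes

clf = LogisticRegression(solver="lbfgs", max_iter=1000)   # log-linear model
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```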
## 3.3 Similarity Metrics
Berzak et al. (2014, 2017) suggest a link between English as a second language (ESL) production and linguistic similarities. To recreate the language similarity plots from the original study, we derive the same model-based metric and a cosine similarity based on syntactic and geographical features of a language.
## 3.3.1 Linguistic-Based Similarity
We use the same procedure and data as the original study to derive this similarity metric. The data is obtained from the URIEL Typological Compendium (Littell et al., 2017a). The selected information consists of data derived from the World Atlas of Language Structures, features from Syntactic Structures of the World's Languages, and data from parsing the prose typological descriptions in Ethnologue. This information is supplemented by data on the languages belonging to different families, retrieved from Glottolog's world language tree. We use lang2vec (Littell et al., 2017b) for obtaining the complete feature vectors (with KNN completion). After removing features with the same value for all languages,4 we get a total of 189 features. The similarity scores between languages are then calculated as the cosine similarity of their feature vectors.
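A sketch of this computation using the lang2vec package is shown below. The `syntax_knn` feature-set string and the handling of constant features are simplifying assumptions; the actual setup combines several URIEL feature classes (WALS, SSWL, Ethnologue and family information) with KNN completion.

```python
import numpy as np
import lang2vec.lang2vec as l2v   # pip install lang2vec

# ISO 639-3 codes for the seven native languages in Table 1.
langs = ["est", "eng", "fin", "deu", "heb", "ita", "spa"]

feats = l2v.get_features(langs, "syntax_knn")        # KNN-completed feature vectors
X = np.array([feats[lang] for lang in langs], dtype=float)
X = X[:, X.std(axis=0) > 0]                          # drop features identical for all languages

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

similarity = np.array([[cosine(X[i], X[j]) for j in range(len(langs))]
                       for i in range(len(langs))])
print(np.round(similarity, 2))
```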
## 3.3.2 Model-Based Similarity
The model-based similarity captures native language similarities paralleled in reading patterns.
In the same way as Berzak et al. (2017), we define
"the classification uncertainty for a pair of native languages y and y
′in our data collection D, as the average probability assigned by the NLIR classifier to one language given the other being the true native language." It is called English Reading Similarity (ERS) and is defined as:
$$ERS_{y,y'}=\frac{\sum_{(x,y)\in D_{y}}p(y'\mid x;\theta)+\sum_{(x,y')\in D_{y'}}p(y\mid x;\theta)}{|D_{y}|+|D_{y'}|}$$
The model, trained on all seven languages to perform NLIR, is used to extract language similarity. We separately feed test data sets for a single language y at a time and extract prediction probabilities for each other language y
′. Then a mean of the two language probabilities is calculated.
It is suggested that a higher classification uncertainty indicates greater language similarity. In figure 2 we plot the similarity metrics against each other to test this in the original study implied link.
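The ERS score can be computed directly from the classifier's predicted probabilities on held-out readers; a minimal sketch with illustrative variable names:

```python
import numpy as np

def english_reading_similarity(proba, y_true, classes, y, y_prime):
    """proba: (n_readers, n_classes) predicted probabilities for held-out readers;
    y_true: their true native languages; classes: column order of proba."""
    proba, y_true = np.asarray(proba), np.asarray(y_true)
    i, j = classes.index(y), classes.index(y_prime)
    mask_y, mask_yp = y_true == y, y_true == y_prime
    numerator = proba[mask_y, j].sum() + proba[mask_yp, i].sum()
    return numerator / (mask_y.sum() + mask_yp.sum())

# Toy example: three Finnish readers and two Estonian readers.
classes = ["fi", "et", "de"]
proba = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.7, 0.2, 0.1],
         [0.3, 0.6, 0.1], [0.4, 0.5, 0.1]]
y_true = ["fi", "fi", "fi", "et", "et"]
print(english_reading_similarity(proba, y_true, classes, "fi", "et"))  # 0.32
```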
## 4 Results
Table 2 presents the results for the baseline and the log-linear model when using 10-fold cross-validation. The model is trained and evaluated on all seven languages.

All variants of the model perform substantially better than the majority class baseline. Similarly to the results by Berzak et al. (2017), the model trained on the WFC feature set achieves the highest cross-validation accuracy (81.29%). While the models trained on syntactic and information cluster features improve with additional bi- and trigrams, the words-in-fixed-context feature set does not follow this trend, which differs from the original paper's results.
4 Note that this can be considered non-standard, as the features of a language might impact the similarity between two other languages. We mainly used this strategy to follow the previous setup.
Since the original study was done with a different number of languages, we investigate how the performance changes depending on the number of target classes. Figure 1 shows the changes in model performance depending on how many target classes it has. For example, 3 on the x-axis corresponds to the group of all combinations ($C_7^3$) of any three languages in the training set. The y-axis shows the mean performance of all classifiers in that group. The results of the individual classifiers in a group vary; thus, we plot the mean performance. As expected, we see that for all feature sets the performance drops when the number of languages increases.
## 5 Discussion
As evident from Table 2, our model seems to perform similarly to the original paper's results (Table 3, Appendix A). We cannot compare these results directly due to the difference in languages; yet, for all combinations of four languages in our data set, we observe (Figure 1) that the average performance is 81% (compared to 71% in Berzak et al. (2017)). However, since we train our model with 3 more languages than the original study and still get similar results, we can confirm that machine learning models can pick up the differences in reading patterns of readers with different native languages.
Contrary to the original paper, we do not see large improvements in performance with additional bigram and trigram features. We also explore language similarity by looking at the suggested positive correlation between classification uncertainty and linguistic similarities.
Results from Berzak et al. (2017) are included in Figure 5, Appendix A for convenience. The plot reproduced in Figure 2 does not seem to confirm this hypothesis, as no clear trend is visible. We observe that the uncertainty when distinguishing native speakers from L2 readers is substantially lower (mean 0.01) than when distinguishing two groups of L2 readers with different native languages (mean 0.11). We also compute a correlation coefficient of 0.06, which does not reproduce the significant correlation found by Berzak et al. Similarly, Ward hierarchical clustering of the linguistic similarities and of the classification uncertainty, presented in Figure 3, does not show closeness between the groupings obtained with either metric. The dendrograms overlap little on the set of languages we used, contrary to the original finding (see Figure 4), and share little similarity both in terms of the languages in each cluster and the general shape of the tree.
This suggests that the relation between the English reading patterns and language similarities of the native language found by Berzak et al. (2017) may be more nuanced than the original plot (Figure 4, Appendix A) initially suggests.
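For completeness, the Ward dendrograms discussed above can be produced from a similarity matrix as sketched below; the random matrix is only a stand-in for the linguistic-similarity and ERS matrices, and turning similarities into distances as 1 - similarity is one simple choice among several.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

langs = ["et", "en", "fi", "de", "he", "it", "es"]
rng = np.random.RandomState(0)
sim = rng.rand(7, 7)
sim = (sim + sim.T) / 2            # placeholder symmetric similarity matrix
np.fill_diagonal(sim, 1.0)

dist = 1.0 - sim                   # convert similarities to distances
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="ward")
dendrogram(Z, labels=langs, no_plot=True)   # set no_plot=False to draw the tree
```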
## 6 Conclusion
We replicate the study of Berzak et al. (2017) and are the first to confirm their finding that a reader's native language can be predicted from gaze patterns when reading English text. Having a larger set of more interrelated languages than the original study, we achieve comparable classification results, supporting the suggested cross-linguistic influence from the native language on L2. Despite the satisfactory performance of the NLIR model, the results of investigating the relationship between reading patterns and linguistic similarity are not as straightforward. We believe the relation to be more nuanced than suggested, as we are not able to replicate the same outcomes.
## Acknowledgements
Maria Barrett is supported by a research grant
(34437) from VILLUM FONDEN.
## References
R.A. Alonso. 2016. *Crosslinguistic Influence in Second* Language Acquisition. G - Reference,Information and Interdisciplinary Subjects Series. Multilingual Matters.
Eva Berkes and Suzanne Flynn. 2012. Multilingualism:
New perspectives on syntactic development.
Yevgeni Berzak, Chie Nakamura, Suzanne Flynn, and Boris Katz. 2017. Predicting native language from gaze. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 541–551, Vancouver, Canada. Association for Computational Linguistics.
Yevgeni Berzak, Chie Nakamura, Amelia Smith, Emily Weng, Boris Katz, Suzanne Flynn, and Roger Levy.
2022. Celer: A 365-participant corpus of eye movements in l1 and l2 english reading. *Open Mind*, 6:41–
50.
Yevgeni Berzak, Roi Reichart, and Boris Katz. 2014.
Reconstructing native language typology from foreign language usage. In *Proceedings of the Eighteenth Conference on Computational Natural Language Learning*, pages 21–29, Ann Arbor, Michigan.
Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting geco: An eyetracking corpus of monolingual and bilingual sentence reading.
Behavior research methods, 49:602–615.
Uschi Cop, Denis Drieghe, and Wouter Duyck. 2015.
Eye movement patterns in natural reading: A comparison of monolingual and bilingual reading of a novel.
PLOS ONE, 10(8):1–38.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Holger Hopp. 2010. Ultimate attainment in l2 inflection: Performance similarities between non-native and native speakers. *Lingua*, 120:901–931.
Victor Kuperman, Noam Siegelman, Sascha Schroeder, Cengiz Acarturk, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, Daria Chernova, Sara Fonseca, Nicolas Dirix, Wouter Duyck, Argyro Fella, Ram Frost, Carolina Gattei, Areti Kalaitzi, Kaidi Lõo, Marco Marelli, and Kerem Usal. 2022. Text reading in english as a second language: Evidence from the multilingual eye-movements corpus. Studies in Second Language Acquisition, pages 1–35.
Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017a. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics:
Volume 2, Short Papers, pages 8–14. Association for Computational Linguistics.
Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017b.
URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14, Valencia, Spain. Association for Computational Linguistics.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. *Psychological bulletin*, 124(3):372.
Keith Rayner, Alexander Pollatsek, Jane Ashby, and Charles Clifton Jr. 2012. *Psychology of reading*. Psychology Press.
Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
2022. mLUKE: The power of entity representations in multilingual pretrained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7316–7330, Dublin, Ireland. Association for Computational Linguistics.
Leah Roberts and Anna Siyanova-Chanturia. 2013. Using eye-tracking to investigate topics in l2 acquisition and l2 processing. *Studies in Second Language Acquisition*, 35.
Oren Tsur and Ari Rappoport. 2007. Using classifier features for studying the effect of native language on the choice of written second language words. In Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition, pages 9–16, Prague, Czech Republic. Association for Computational Linguistics.
Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multitask learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176–197, Online. Association for Computational Linguistics.
## A Results By Berzak Et Al. (2017)
| Shared regime | unigrams | +bigrams | +trigrams |
|------------------------------|----------|-----------|-----------|
| Majority Class | 25.52 | | |
| Random Clusters | 22.76 | | |
| Information Clusters (IC) | 41.38 | 44.14 | 46.21 |
| Syntactic Clusters (SC) | 45.52 | 57.24 | 58.62 |
| Syntactic and Information Clusters (SC+IC) | 51.72 | 57.24 | 60.0 |
| Words in Fixed Context (WFC) | 64.14 | 68.28 | 71.03 |
Table 3: Native Language Identification from Reading results by Berzak et al. (2017)
## B Obtaining Syntactic Annotations
We trained a multi-task MaChAmp model (van der Goot et al., 2021), including UPOS, PTB POS,
lemmatization, morphological tagging, and dependency parsing. We used MaChAmp v0.4 with default settings, trained on the English Web Treebank v2.11 (because it has PTB tags and is English). It uses the combined (summed cross-entropy) loss of all tasks. We do not use the morphological tags and lemmas but include them for future work. All default hyperparameters are used and the default dev-split is used for model picking. We first ran the parser on the untokenized input but noticed that it quite commonly outputs the PUNCT label and corresponding relations to (end-of-sentence) words that have punctuation attached.
So we pre-split using the BasicTokenizer from huggingface (which only separates punctuation) and use the labels of the words for the combined string. We compared mBERT (Devlin et al., 2019) with XLM-R
Large (Conneau et al., 2020) and MLUKE (Ri et al., 2022). We compared their outputs on the MECO
dataset manually and found the best performance with the XLM-R Large model (although MLUKE gets higher accuracies on EWT-dev).
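For illustration, the punctuation pre-splitting step can be reproduced as follows, assuming a recent transformers version that exposes BasicTokenizer at the top level; aligning the predicted labels back to the original tokens is omitted.

```python
from transformers import BasicTokenizer

tokenizer = BasicTokenizer(do_lower_case=False)
sentence = "Reading in a second language is slower."
tokens = tokenizer.tokenize(sentence)   # trailing punctuation becomes its own token
print(tokens)            # ['Reading', 'in', 'a', 'second', 'language', 'is', 'slower', '.']
print(" ".join(tokens))  # the pre-split string that is passed to the parser
```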
## C Limitations
The MECO dataset (Kuperman et al., 2022) is recorded at different labs following the same strict protocol.
Nevertheless, location and experimenter effects may be confounding factors for the NLIR task. The CELER data (Berzak et al., 2022), used by (Berzak et al., 2017), seems to all be recorded at the same lab. Since we confirm their hypothesis, we do not see this as a fatal flaw in our study. There is no other available dataset that would allow us to replicate their finding. |
cui-etal-2023-medtem2 | {M}ed{T}em2.0: Prompt-based Temporal Classification of Treatment Events from Discharge Summaries | https://aclanthology.org/2023.acl-srw.27 | Discharge summaries are comprehensive medical records that encompass vital information about a patient{'}s hospital stay. A crucial aspect of discharge summaries is the temporal information of treatments administered throughout the patient{'}s illness. With an extensive volume of clinical documents, manually extracting and compiling a patient{'}s medication list can be laborious, time-consuming, and susceptible to errors. The objective of this paper is to build upon the recent development on clinical NLP by temporally classifying treatments in clinical texts, specifically determining whether a treatment was administered between the time of admission and discharge from the hospital. State-of-the-art NLP methods including prompt-based learning on Generative Pre-trained Transformers (GPTs) models and fine-tuning on pre-trained language models (PLMs) such as BERT were employed to classify temporal relations between treatments and hospitalisation periods in discharge summaries. Fine-tuning with the BERT model achieved an F1 score of 92.45{\%} and a balanced accuracy of 77.56{\%}, while prompt learning using the T5 model and mixed templates resulted in an F1 score of 90.89{\%} and a balanced accuracy of 72.07{\%}.Our codes and data are available at \url{https://github.com/HECTA-UoM/MedTem}. | # Medtem2.0: Prompt-Based Temporal Classification Of Treatment Events From Discharge Summaries
Yang Cui, Lifeng Han, and **Goran Nenadic**
Department of Computer Science The University of Manchester Oxford Rd, Manchester M13 9PL, UK
[email protected] lifeng.han, [email protected]
## Abstract
Discharge summaries are comprehensive medical records that encompass vital information about a patient's hospital stay. A crucial aspect of discharge summaries is the temporal information of treatments administered throughout the patient's illness. With an extensive volume of clinical documents, manually extracting and compiling a patient's medication list can be laborious, time-consuming, and susceptible to errors. The objective of this paper is to build upon the recent development on clinical NLP by temporally classifying treatments in clinical texts, specifically determining whether a treatment was administered between the time of admission and discharge from the hospital. State-of-the-art NLP methods including prompt-based learning on Generative Pretrained Transformers (GPTs) models and finetuning on pre-trained language models (PLMs)
such as BERT were used to classify temporal relations between treatments and hospitalisation periods in discharge summaries. Fine-tuning with the BERT model achieved an F1 score of 92.45% and a balanced accuracy of 77.56%,
while prompt learning using the T5 model and mixed templates resulted in an F1 score of 90.89% and a balanced accuracy of 72.07%.
Our codes and data are available at https:
//github.com/HECTA-UoM/MedTem.
## 1 Introduction
Clinical texts contain important temporal information, such as medication start and end dates, appointment dates, and diagnosis dates. Extracting this information can provide insights into a patient's medical history and allow doctors to make more informed decisions about their treatment.
However, this process requires a significant amount of time and effort. To help healthcare professionals make informed decisions more efficiently, leading to better patient outcomes, we designed the project MedTem, medication and treatment event extraction and their relation modelling with temporal information. By using natural language processing (NLP) methods to extract temporal information from clinical texts, doctors can spend less time deciphering medical records and more time focusing on providing the best care possible to their patients.
This study reports findings from MedTem2.0, a follow-up work from our previous investigation MedTem (Tu, 2022).
Clinical texts can be challenging to process due to their unstructured nature and the use of medical jargon. Thus, developing effective NLP techniques for extracting temporal information from clinical texts is crucial for improving healthcare outcomes. The primary goal of this work is to classify temporal information related to medication, surgeries, and other treatments within Electronic Health Records (EHRs) to determine if these treatments occurred during the hospitalisation period.
This work aims to develop a system capable of classifying temporal information using prompt-based learning (PBL) from texts, which could aid healthcare professionals in understanding patients' medical histories and facilitate research in clinical text mining.
As an example, in Table 1, given the admission and discharge dates, we aim to determine whether the *left carotid endarterectomy* and *vein patch angioplasty* were used during the hospitalisation period.
The note indicates that those treatments were administered on 3/3/92, which falls between the admission and discharge dates, suggesting that they were used during hospitalisation. We assume that all treatment information is provided and only need to analyse the temporal information.
To the best of our knowledge, this is the first attempt at using prompt-based learning for the temporal classification of treatments in the clinical domain, with the following outcomes: 1) we established a high baseline score of 90.89% F1 and 72.07% balanced accuracy by using prompt-based learning, demonstrating the
effectiveness of the developed system for classifying temporal relationships between treatments and hospitalisation times; 2) we achieved improved performance using fine-tuning with the BERT model, resulting in a 92.45% F1 score and 77.56% balanced accuracy.

| Admission Date | Discharge Date | Doctor's Note |
|----------------|----------------|---------------|
| 02/22/92 | 03/08/92 | She was, therefore, cleared for the operating room, and on 3/3/92, she underwent a left carotid endarterectomy, with continuous electroencephalogram monitoring and vein patch angioplasty, which was uneventful. |

Table 1: Example of clinical free text.
## 2 Methodologies

## 2.1 Task Overview
The pipeline shown in Figure 1 presents the methodology. The key approaches entail deriving gold labels from annotated datasets, following several pre-processing steps such as few-shot learning and sentence segmentation, among others.
To evaluate the efficacy of prompt-based learning in temporally classifying treatment entities, two widely-adopted paradigms were used for comparison: pre-trained fine-tuning and prompt-based learning. Within these paradigms, three state-of-the-art pre-trained language models were used to perform the task: the masked language model BERT, the Seq2seq model T5, and the auto-regressive language model GPT-2 (Devlin et al., 2018; Raffel et al., 2020; Radford et al., 2019). All these models are based on the Transformer architecture but use different components: BERT the encoder, GPT the decoder, and T5 both the encoder and decoder. We used BERT-base instead of BERT-large because the latter requires more computational resources than the Colab platform we used could provide.
## 2.2 Data Pre-Processing

Step I: Generation of Gold Standard The i2b2 temporal relations corpus we used contains pre-existing layers of gold standard annotations, such as clinical concepts (problems, tests, treatments)
and coreference relations (Uzuner et al., 2012, 2011), which can facilitate temporal reasoning.
In each discharge note, there are three types of annotations: events, temporal expressions, and temporal relations. Event annotations (EVENTs) encompass three distinct clinical concepts (i.e. PROBLEMs, TESTs, and TREATMENTs), clinical departments, EVIDENTIALs (words or phrases patients use to describe their symptoms), and OCCURRENCEs (other events, such as admission, that indicate the patient's timeline). Each EVENT
possesses three attributes: TYPE, MODALITY, and POLARITY. For this specific task, we only need to identify the TYPE of EVENT as TREATMENT and OCCURRENCE among all the TYPE attributes (PROBLEM, TEST, TREATMENT, CLINICAL_DEPT, EVIDENTIAL, or OCCURRENCE). Figure 2 shows the discharge summary paragraph; the EVENTs in this record are shown in Table 2.
In clinical records, the temporal expression annotations use the TIMEX3 tag, which includes four categories: time, date, duration, and frequency.
Each TIMEX3 value (VAL) is standardised to a unified format, such as time and date being represented as [YYYY-MM-DD]T[HH:MM]. Additionally, the MOD attribute indicates the characteristics of the temporal expression. Table 3 shows the TIMEX3 in the sample clinical record snippet. Once we have acquired all the EVENT and TIMEX3 information, we can map the temporal relations (TLINKs) between time and events, or between events themselves (Table 4). The TLINK categories include BEFORE, AFTER, BEGUN_BY, ENDED_BY,
DURING, SIMULTANEOUS, OVERLAP, and BEFORE_OVERLAP.
Upon identifying all the treatment EVENTs and their relationships with admission and discharge times, we assign a label of "ON" to those entities where treatment occurs after or overlaps with the admission time and is also before or overlaps with the discharge time, indicating that the treatment was administered during hospitalisation. Conversely, we assign a label of "OFF" to the remaining treatments, signifying that they were not used during hospitalisation. Figure 3 illustrates the application of this rule-based approach for generating
the necessary gold labels. These gold labels comprise the document name, discharge note, treatment entity, and the label. In this study, the provided dataset consists of a training dataset and a testing dataset. After processing the data using the gold label generator as above, we obtained 3,075 ON-labelled training samples (indicating treatments used during hospital stays) and 762 OFF-labelled samples (indicating treatments not used during hospital stays). This results in an imbalanced label distribution in the dataset.

Figure 2: Sample discharge summary snippet.

Admission Date : 06/11/1991 Discharge Date : 06/22/1991 HISTORY OF PRESENT ILLNESS :
Patient is a 28 year old gravida IV , para 2 with metastatic cervical cancer admitted with a question of malignant pericardial effusion . Patient underwent a total abdominal hysterectomy in 02/90 for a 4x3.6x2 cm cervical mass felt to be a fibroid at Vanor .
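Returning to the labelling rule above, the sketch below illustrates how such an ON/OFF decision could be made; the set of TLINK types treated as "overlap" and the function name are illustrative assumptions rather than the exact implementation.

```python
# Illustrative sketch of the ON/OFF gold-label rule (not the exact implementation):
# a treatment is labelled "ON" if it occurs after or overlaps the admission time
# AND occurs before or overlaps the discharge time; otherwise it is "OFF".

OVERLAP_LIKE = {"OVERLAP", "SIMULTANEOUS", "DURING"}  # assumed overlap-style TLINKs

def label_treatment(rel_to_admission: str, rel_to_discharge: str) -> str:
    """rel_to_* are TLINK types linking the treatment EVENT to the admission
    and discharge times, e.g. 'BEFORE', 'AFTER', 'OVERLAP'."""
    after_admission = rel_to_admission == "AFTER" or rel_to_admission in OVERLAP_LIKE
    before_discharge = rel_to_discharge == "BEFORE" or rel_to_discharge in OVERLAP_LIKE
    return "ON" if after_admission and before_discharge else "OFF"

print(label_treatment("AFTER", "BEFORE"))   # treatment given during the stay -> ON
print(label_treatment("BEFORE", "BEFORE"))  # treatment given before admission -> OFF
```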
Step II: Few-shot Learning to Balance Labels To address the label-imbalance issue, we used a few-shot learning approach to create a balanced training dataset. This involved randomly selecting an equal number of samples from each label and combining them to form the few-shot training dataset.
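A minimal sketch of this balanced sampling step is shown below; the data structure and function names are illustrative assumptions.

```python
# Sketch of balanced few-shot sampling: draw the same number of examples per
# label and combine them into one training set (illustrative, not the exact code).
import random

def build_balanced_fewshot(samples, k_per_label, seed=42):
    """samples: list of dicts, each with a 'label' key that is 'ON' or 'OFF'."""
    rng = random.Random(seed)
    fewshot = []
    for label in ("ON", "OFF"):
        pool = [s for s in samples if s["label"] == label]
        fewshot.extend(rng.sample(pool, min(k_per_label, len(pool))))
    rng.shuffle(fewshot)
    return fewshot

# e.g. a balanced training set with 128 examples per label
# fewshot_train = build_balanced_fewshot(train_samples, k_per_label=128)
```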
Furthermore, most notes contain numerous abbreviations, such as "mcg subq q.d.", which stands for "micrograms subcutaneously once daily". However, since our objective is to analyse temporal information related to treatments, addressing dosage and frequency abbreviations is not necessary.
Step III: Sentence Segmentation Due to the nature of the dataset, which consists of clinical discharge notes, doctors frequently use brief sentences or even short phrases to describe various treatments, tests, or other patient-related information. This characteristic simplifies the process of sentence segmentation, which can be achieved by splitting the text on newline characters ("\n") and periods ("."). The rationale behind sentence segmentation is to preserve and enhance the extraction of contextual information within the text, as distinct sentences often address different topics or aspects.
Step IV: Sentence Window An interesting aspect is that a single treatment may be mentioned multiple times in one clinical note, each referring to different events with distinct time sequences.
Providing the entire text as input data would be imprecise and inaccurate. Additionally, clinical notes predominantly consist of factual statements and clinical declarations, with sentences generally
| Event | Type | Modality | Polarity |
|----------------------------------|---------------|------------|------------|
| [Admission] | OCCURRENCE | FACTUAL | POS |
| [Discharge] | OCCURRENCE | FACTUAL | POS |
| [gravida IV] | OCCURRENCE | FACTUAL | POS |
| [metastatic cervical cancer] | PROBLEM | FACTUAL | POS |
| [malignant pericardial effusion] | PROBLEM | POSSIBLE | POS |
| [a total abdominal hysterectomy] | TREATMENT | FACTUAL | POS |
| [a fibroid] | PROBLEM | POSSIBLE | POS |
| [Vanor] | CLINICAL_DEPT | FACTUAL | POS |

Table 2: EVENT annotations in the sample discharge summary snippet.
being independent. As a result, we used a **Sentence Window** approach to extract valuable information.
For instance, if the target treatment entity is in the target sentence, and the sentence window size is set to 4, the model selects two sentences before and after the target sentence. The input data consists of the target sentence, its surrounding sentences, and the key temporal information of admission and discharge times, which appear at the beginning of every clinical note. Thus, this approach ensures that the model incorporates relevant temporal information and context.
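The segmentation and windowing steps can be sketched as follows; assumptions include that the admission/discharge header occupies the first lines of each note, and all names are illustrative.

```python
# Sketch of Steps III-IV: split a note into sentences, find the sentence that
# mentions the target treatment, and keep a window of surrounding sentences plus
# the admission/discharge header (illustrative, not the exact implementation).
import re

def split_sentences(note: str):
    # discharge notes are short and declarative, so splitting on newlines and
    # full stops is a reasonable approximation
    return [p.strip() for p in re.split(r"[\n.]", note) if p.strip()]

def sentence_window(note: str, treatment: str, window: int = 4) -> str:
    sentences = split_sentences(note)
    header = " ".join(sentences[:2])  # assumed to hold the admission/discharge dates
    idx = next((i for i, s in enumerate(sentences) if treatment in s), None)
    if idx is None:
        return header
    half = window // 2  # window=4 -> two sentences before and after the target
    context = sentences[max(0, idx - half): idx + half + 1]
    return header + " " + " ".join(context)
```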
Step V: Tokenization Tokenization is a crucial step in the natural language processing pipeline, wherein paragraphs are segmented into sentences, and sentences are further broken down into individual tokens or words (Koehn, 2009). This process enables the conversion of unstructured textual data into a structured, word-based data format, facilitating subsequent processing and analysis. By transforming unstructured data into structured data, we can represent textual information as vectors, and tokenization serves as the foundational step in this transformation.
In prompt-based learning, designing a template that includes an input sequence and prompting sentence is essential. However, creating a tokenizer for this purpose can be time-consuming and prone to errors. This is due to the presence of specific information, such as masked tokens or auto-generated tokens, embedded in the template, which requires careful handling during tokenization. Any mismatches in masked tokens can result in serious consequences. Furthermore, different PLMs may have distinct architectures, leading to varying tokenization strategies, necessitating consistency in context processing.
## 2.3 Prompt-Based Learning Vs Fine-Tuning
In conventional supervised learning for NLP, the objective is to predict an **output y** based on an input x utilising the model P(y|x; θ) (Manning and Schutze, 1999). In classification tasks, y denotes the class label corresponding to **input x**. To train the model's parameters θ, a dataset consisting of input-output pairs is required for predicting this conditional probability (Goodfellow et al.,
2016). However, obtaining adequately annotated
(labelled) data for certain domains can be challenging. Prompt learning methods address this limitation by learning a language model (LM) that estimates the probability P(x; θ) of the text x itself. Consequently, this probability is used to predict y, thereby bypassing the need for extensive labelled datasets (Liu et al., 2023; Ding et al., 2021).
| From extent | Type | To extent |
|----------------------------------|--------------|-----------------------|
| [Admission] | SIMULTANEOUS | [06/11/1991] |
| [Discharge] | SIMULTANEOUS | [06/22/1991] |
| [gravida IV] | BEFORE | [SECTIME: 06/11/1991] |
| [para 2] | BEFORE | [SECTIME: 06/11/1991] |
| [para 2] | OVERLAP | [gravida IV] |
| [...] | ... | [...] |
| [a total abdominal hysterectomy] | BEFORE | [SECTIME: 06/11/1991] |

Table 4: Example TLINKs between events and temporal expressions in the sample record.
There are three main steps: prompt construction, answer selection, and answer mapping (refer to Appendix C.1).
We used **OpenPrompt**, a toolkit for implementing prompt learning in downstream tasks (Ding et al., 2021). It offers a function for loading PLMs, tokenizers, and other required configurations; this function accommodates different types of PLMs (MLM, LM, and Seq2seq) and conducts tokenization accordingly. Designed with encapsulated data processing APIs, users can create templates in a human-readable style and conveniently operate on both the input and the template simultaneously.
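For illustration, loading a PLM together with its wrapped tokenizer and building one input example looks roughly like the sketch below; it follows OpenPrompt's documented interface, though exact argument names may vary across versions, and the example text and label mapping are assumptions.

```python
# Load a PLM, its tokenizer, configuration, and tokenizer wrapper with OpenPrompt.
from openprompt.plms import load_plm
from openprompt.data_utils import InputExample

# "t5" can be swapped for "bert" or "gpt2" to compare architectures
plm, tokenizer, model_config, WrapperClass = load_plm("t5", "t5-base")

# One training instance: the windowed note as text_a, the treatment as text_b
example = InputExample(
    guid=0,
    text_a="Admission Date : 02/22/92 Discharge Date : 03/08/92 ... on 3/3/92 , "
           "she underwent a left carotid endarterectomy ...",
    text_b="a left carotid endarterectomy",
    label=0,  # assumed mapping: 0 = "ON", 1 = "OFF"
)
```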
To identify the optimal prompt format for this task, we examine various components of the prompt construction. We explore different large language model (LLM) architectures, and adjust the template's structure and format within the prompt construction. We modify the answer's form in answer selection to correspond with the chosen template.
In this context, we will first define the templates and verbalizers used within the framework and our experiments. We refer to the traditional prompt-based learning approach that uses human-designed templates and verbalizers as *manual templates* and *manual verbalizers* respectively. This strategy was initially introduced as Pattern-Exploiting Training (PET) by Schick and Schütze (2020).
Manual Template Creating manual components in prompt learning can be quite intricate, as slight modifications to the tokens can lead to significant changes in performance. Domain expertise is typically required for effective engineering of these components. Examples of manual template can be a statement or question-answering format.
The **Soft Template** (Example 1) approach shares similarities with the manual method but replaces fixed manual components with soft (trainable) tokens or embeddings, denoted as <[soft]>. Combining some fixed manual components with soft tokens leads to the **Mixed Template** approach (Example 2), which uses both fixed and trainable elements in the template construction.
Listing 1: Example of Soft Template

text = '<[clinical_record]> <[soft]> <[treatment]> <[soft]> <[soft]> <[mask]> <[soft]>.'

Listing 2: Example of Mixed Template

text = '<[clinical_record]> Question: <[treatment]> <[soft]> <[soft]> <[soft]> <[soft]> <[soft]>. Is it correct? <[mask]>'
Leveraging the T5 model's encoder-decoder architecture, we can generate variable-length output sequences based on the input sequence.
With this advantage, the PLM can generate part of the prompt within the manual template.
Choosing to sacrifice human interpretability, one can create soft prompt components instead.
A typical mixed template takes the form $x' = [P_0, P_1, \ldots, P_j],\ x,\ [P_{j+1}, P_{j+2}, \ldots, P_k],\ [\mathrm{MASK}]$, where for $i \in \{0, 1, \ldots, k\}$, $P_i$ represents a token of the template.
Verbalizer The verbalizer functions as a mechanism that maps single or multiple distinct tokens to well-defined class labels. The embedding or hidden state associated with the <[MASK]> position, generated by the PLM, is subsequently processed through a standard language model head or classifier. This step computes the probabilities connected to the class label tokens derived from the verbalizer. In this task, a **Manual Verbalizer** was used, which entailed manually constructing a list of answers. These answers can be either token-based or span-based, depending on the specific template used.
In a similar fashion to the soft template, a Soft Verbalizer can be conceptualised as replacing the words in the verbalizer with trainable embeddings for each class. As a result, when using a soft verbalizer, there is no need to establish a mapping from vocabulary V to classes C, as the trainable vectors lack semantic meaning.
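Continuing the loading sketch above, the template and verbalizer components described in this section can be assembled with OpenPrompt roughly as follows; the template wording, label words, and class names are illustrative assumptions, and argument names follow the OpenPrompt documentation and may differ slightly across versions.

```python
# Sketch of the prompt components: a manual question-answering template, a mixed
# template with trainable soft tokens, a Yes/No manual verbalizer, and the
# prompt-based classification model (illustrative, not the exact configuration).
from openprompt.prompts import ManualTemplate, MixedTemplate, ManualVerbalizer
from openprompt import PromptForClassification

manual_template = ManualTemplate(
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} Question: was {"placeholder":"text_b"} '
         'given during hospitalisation? {"mask"}',
)

# Mixed variant: fixed manual words interleaved with trainable soft tokens
mixed_template = MixedTemplate(
    model=plm,
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} Question: {"placeholder":"text_b"} '
         '{"soft"} {"soft"} {"soft"}. Is it correct? {"mask"}',
)

verbalizer = ManualVerbalizer(
    tokenizer=tokenizer,
    classes=["ON", "OFF"],
    label_words={"ON": ["Yes"], "OFF": ["No"]},
)

prompt_model = PromptForClassification(
    plm=plm, template=manual_template, verbalizer=verbalizer,
)
```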
## 2.4 Traditional Fine-Tuning
In the traditional fine-tuning methodology, the downstream task uses a multilayer perceptron (MLP) denoted as $f_{\mathrm{MLP}}(\cdot)$. This MLP takes the pooled sequence embedding generated by the PLM as input and outputs an n-dimensional vector, where n is the number of classes (Kowsari et al., 2019). Given an input text x, the PLM first processes the raw input to obtain an m-dimensional embedding for each token. Next, a pooling operation, such as the mean, is applied over all token embeddings to generate a single m-dimensional sequence embedding h(x). The sequence embedding h(x) is then fed into the MLP block through a standard feed-forward pass to obtain the likelihood distribution across the n classes using a softmax operator.
Figures 4, 5, and 6 illustrate examples of PBL and PLM fine-tuning on our task, adapted from Taylor et al. (2022).
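As a concrete illustration of the fine-tuning head described above, the sketch below mean-pools a PLM's token embeddings and passes them through an MLP classifier using Hugging Face Transformers; the model name, layer sizes, and class count are assumptions, not the exact training setup.

```python
# Sketch of traditional fine-tuning: mean-pool the PLM token embeddings and feed
# the pooled vector to an MLP head that outputs logits over the n classes.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MeanPoolClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # mean pooling h(x)
        return self.mlp(pooled)  # logits; softmax is applied inside the loss

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MeanPoolClassifier()
batch = tokenizer(["... she was given oxycodone for pain ..."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```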
## 2.5 Evaluation Methods
We take the label "ON" as the positive class and label "OFF" as the negative class. In addition to F1 score, we used balanced accuracy as a performance measure for our model, which calculates the average recall across all classes. The decision to use balanced accuracy instead of overall accuracy stems from the imbalanced distribution of class labels in the test dataset, with 3164 instances of label "ON" and 921 instances of the label "OFF".
Balanced accuracy considers the performance of the model on each class individually, thus avoiding potential misinterpretations that can arise from using overall accuracy when one class is substantially more prevalent than the other.
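Both metrics can be computed with scikit-learn as in the short sketch below; the toy labels are purely illustrative.

```python
# F1 on the positive "ON" class and balanced accuracy (mean recall per class).
from sklearn.metrics import f1_score, balanced_accuracy_score

y_true = ["ON", "ON", "OFF", "ON", "OFF"]  # toy gold labels
y_pred = ["ON", "ON", "ON",  "ON", "OFF"]  # toy predictions

f1_on = f1_score(y_true, y_pred, pos_label="ON")
b_acc = balanced_accuracy_score(y_true, y_pred)
print(f"F1(ON) = {f1_on:.4f}, balanced accuracy = {b_acc:.4f}")
```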
## 3 Experimental Work

## 3.1 Dataset
In this project, we use electronic health records
(EHRs) from the National NLP Clinical Challenges
(n2c2, formerly known as i2b2) dataset, which is part of an annual challenge workshop 1. We primarily focus on the 2012 n2c2 challenge (Sun et al.,
2013b), which is centred around temporal relations.
The dataset consists of 310 patient clinical history records and hospital course sections from Partners Healthcare and Beth Israel Deaconess Medical Center, along with clinical events, time expressions, and temporal relationship annotations (Sun et al.,
2013a). For ethical reasons and to protect patient privacy, the data has been de-identified and abstracted, including the obfuscation or alteration of names, addresses, and other personal information.
Additionally, accurate time information has been randomly shifted.
1 https://n2c2.dbmi.hms.harvard.edu/about-n2c2
## 3.2 Output From Prompt-Based Learning
We adopt a systematic approach to optimise the performance of different PLMs. Initially, we train the various PLMs on the full training dataset with basic manual templates and verbalizers, fixing the sentence window for the input text and adjusting the learning rate to identify the optimal performance for each model. Comparing the results, we will determine the best-performing PLM at this stage.
Next, with the best PLM and fixed sentence window, we will train the model using the full dataset while varying templates and verbalizers to identify the most effective template. Furthermore, we will maintain the best PLM and template while altering the sentence window to assess the impact of input text on performance.
Upon completing the hyperparameter selection for prompt-based learning, we will obtain the bestperforming model. Finally, we will use few-shot learning to compare this model with the fine-tuning paradigm.
## 3.2.1 Different Language Models
To evaluate the performance of various models, we use a combination of admission and discharge information along with three sentences that include the target sentence and the sentences immediately preceding and following it, where the target sentence contains the target treatment entity. Moreover, we use manual templates and verbalizers, with the template following a question-answering format. The verbalizer is set to a collection of words, specifically "Yes", "No". The entire training process spans 5 epochs.
| PLM   | F1 (ON)                   | Balanced Accuracy         |
|-------|---------------------------|---------------------------|
| BERT  | 87.29 / **90.75** / 90.14 | 50.00 / **69.72** / 69.57 |
| GPT-2 | 90.57 / **90.79** / 90.28 | 70.24 / **71.19** / 65.58 |
| T5    | 90.69 / **91.24** / 90.12 | 70.43 / **71.43** / 68.36 |

Table 5: Results of the three PLMs under different learning rates (the learning rates explored included 1E-4, 2E-4, 6E-5, 2E-5, and 5E-6); each cell lists the scores obtained with the different learning rates, and bold marks the highest score for each PLM.
Upon adjusting the learning rate for the various PLMs, several example results were obtained, as shown in Table 5. The bold font indicates the highest score for each PLM. In fact, there was not a big difference between them: T5 is 1.71 and 0.24 higher than BERT and GPT-2 respectively under balanced accuracy, and holds a 0.49 and 0.45 advantage in F1 score.
During the training process, we observed that all the results demonstrated a higher recall than precision, indicating that the model correctly identifies most of the true positive cases (with few false negatives). This situation can be attributed to the training data having a significantly larger number of positive examples compared to negative ones, which is also reflected in the testing dataset. Additionally, when examining the negative class accuracy, the models only achieve approximately 50%. This suggests that they are not proficient in detecting negative classes. However, when using a balanced training dataset, the negative class accuracy increases to 61%.
## 3.2.2 Different Prompt Learning Setups
In order to assess the effectiveness of different combinations of templates and verbalizers, we used a variety of templates in conjunction with both manual and soft verbalizers. For the manual template, we used a question-answering format, combined with a "Yes"/"No" manual verbalizer and a soft verbalizer. Additionally, the soft template used Example 1 for prompting, with fixed and predefined positions and lengths for the soft tokens, and was combined with the same manual and soft verbalizers as the manual template. For the mixed template, we used Example 2 along with the same verbalizers as before. During the comparison of different prompt engineering approaches, we also experimented with various text lengths for each template category.
| Template | Verbalizer | F1 (ON) | B.Accy. |
|----------|------------|---------|---------|
| Manual | Manual | 91.24 | 71.43 |
| Manual | Soft | 90.85 | 70.52 |
| Soft | Manual | 90.68 | 68.33 |
| Soft | Soft | 89.80 | 72.48 |
| Mixed | Manual | 90.89 | 72.07 |
| Mixed | Soft | 90.70 | 69.01 |

Table 6: Results of different template and verbalizer combinations.
The evaluation results presented in Table 6 reveal that the (Manual, Manual) combination, in the format (Template, Verbalizer), achieves the highest F1 score of 91.24. This indicates its strong capability to classify "ON" class samples. Additionally, the (Soft, Soft) setup demonstrates the best balanced accuracy of 72.48, which is more suitable when the "OFF" class is as important as the positive class. We list error analysis examples and comparisons of different input texts in Appendix F. The (Mixed, Manual) configuration showcases comparatively good results for both evaluation metrics and will be used as the standard for the next section of comparisons.
## 3.3 PBL Vs Traditional Fine-Tuning
The hyperparameter-optimised outputs from PBL and traditional fine-tuning are displayed in Table 7.
| Paradigm | F1 score | B.Accy. |
|-------------------------|------------|-----------|
| Traditional fine-tuning | 92.45 | 77.56 |
| Prompt-based learning | 91.79 | 75.08 |

Table 7: Hyperparameter-optimised results of traditional fine-tuning and prompt-based learning.
## 4 Related Work
Early research in temporal relation classification focused on extracting and representing temporal information from clinical text. Hripcsak et al. (2002)
proposed a method for representing clinical events and their temporal relationships using an interval-based temporal model, laying the groundwork for understanding temporal dependencies in clinical text.
Inspired by the TimeML standard (Pustejovsky et al., 2003) for annotating temporal expressions and relations in text, the THYME (Temporal Histories of Your Medical Events) annotation guidelines were developed by Styler IV et al. (2014) to adapt TimeML for clinical narratives. These guidelines provided a foundation for temporal relation classification research in the clinical domain. However, achieving temporal understanding in clinical narratives is challenging due to the complexity of determining implicit temporal relations, handling temporal granularity, and dealing with diverse temporal expressions.
## 5 Conclusion And Future Work
In this work, two state-of-the-art approaches were developed to classify the relative timing of treatments in hospital discharge summaries, focusing on determining whether a treatment was administered during hospitalisation or not. These approaches used cutting-edge pre-trained language models, BERT, GPT-2, and T5, in conjunction with prompt-based learning and fine-tuning paradigms.
Both approaches achieved F1 scores of 91.79%
and 92.45%, and balanced accuracy of 75.08% and 77.56%, respectively, on the n2c2 2012 Temporal Relations dataset. The primary challenge was accurately classifying the "OFF" class due to data imbalance and complex semantic meanings that made it difficult for the models to make correct decisions. Future work could investigate the impact of fixed tokens on mixed template performance or the role of longer sequence lengths in soft templates for improved understanding. Additionally, a more comprehensive comparison of prompt learning and traditional fine-tuning can be conducted across various clinical domain tasks, using frozen PLMs in conjunction with few-shot learning methods.
## Limitations
There are several limitations to the experiments conducted in this project that should be acknowledged:
- Selection of the best pre-trained language model (PLM) for prompt-based learning: The evaluation method used to compare the performance of BERT, GPT-2, and T5 in the context of manual templates and manual verbalizers may not be entirely accurate. The performance of these models did not show significant differences, making it difficult to determine the best model for prompt-based learning. Furthermore, other domain-specific PLMs, such as Bio-BERT, which may be better suited for handling clinical data, were not considered in this project.
- Limited exploration of templates: The experiments utilized a limited number of templates, particularly for soft and mixed templates. These templates were primarily based on prompts derived from manual templates.
Further experimentation is needed to explore different patterns, such as varying the position and length of soft token sequences or using soft tokens in mixed templates to replace manual tokens (e.g., "Question:").
- Comparison with frozen PLMs: The experiments did not include a comparison between fine-tuned and frozen PLMs, as done in Taylor's study (Taylor et al., 2022). This comparison could provide valuable insights into the performance trade-offs between these two approaches.
- To address the effects of imbalanced datasets, several strategies have gained popularity. 1)
Re-sampling techniques, for example, Monte Carlo Simulation Analysis, can be used to balance class distribution by oversampling the minority class, undersampling the majority class, or the combination of these two (Gladkoff et al., 2021). 2) Data augmentation techniques, such as the use of Generative Adversarial Networks (GANs), can generate new examples for the minority class by applying transformations to existing data. 3) Furthermore, machine learning approaches like bagging and bootstrapping can reduce variances
by implementing a "voting system" that enables models to make better decisions.
- Finally, it would be advantageous to develop a post-processing step that generates a table displaying all treatments along with their corresponding temporal information. This would create an end-to-end system that physicians could use as a practical tool.
Future research should address these limitations by exploring a broader range of PLMs, templates, and experimental setups to provide a more comprehensive understanding of the performance characteristics of prompt-based learning methods in the clinical domain. Application to some more powerful computational resources will also extend this work.
## Ethical Discussion
The n2c2 (formerly i2b2) 2012 Temporal Relations dataset was used for the development of the approach in this project. This dataset comprises patient-level data in the form of discharge summaries. These documents have been de-identified in accordance with the Health Insurance Portability and Accountability Act of 1996 privacy regulations by the organizers of the n2c2 2012 NLP challenge
(Act, 1996). The dataset was obtained with permission for academic use only after signing a Data Use and Confidentiality Agreement with the n2c2 National Center for Biomedical Computing. So no further ethical approval forms were required to gain access to the dataset.
## Acknowledgements
We thank the reviewers for their precious comments on making our paper better. The work was partially supported by Grant EP/V047949/1 "Integrating hospital outpatient letters into the healthcare data space" (funder: UKRI/EPSRC).
## References
Accountability Act. 1996. Health insurance portability and accountability act of 1996. *Public law*, 104:191.
Akiko Aizawa. 2003. An information-theoretic perspective of tf–idf measures. Information Processing &
Management, 39(1):45–65.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Wendy W Chapman, Will Bridewell, Paul Hanbury, Gregory F Cooper, and Bruce G Buchanan. 2001. A
simple algorithm for identifying negated findings and diseases in discharge summaries. *Journal of biomedical informatics*, 34(5):301–310.
William A Chren. 1998. One-hot residue coding for low delay-power product cmos design. IEEE Transactions on circuits and systems II: Analog and Digital Signal Processing, 45(3):303–313.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Richard S Dick, Elaine B Steen, Don E Detmer, et al.
1997. The computer-based patient record: an essential technology for health care.
Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, and Maosong Sun.
2021. Openprompt: An open-source framework for prompt-learning. *arXiv preprint arXiv:2111.01998*.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017.
Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pages 1126–1135. PMLR.
Serge Gladkoff, Irina Sorokina, Lifeng Han, and Alexandra Alekseeva. 2021. Measuring uncertainty in translation quality evaluation (tqe). arXiv preprint arXiv:2111.07699.
Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving mikolov et al.'s negativesampling word-embedding method. *arXiv preprint* arXiv:1402.3722.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
2016. *Deep learning*. MIT press.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2021. Ppt: Pre-trained prompt tuning for few-shot learning. *arXiv preprint arXiv:2109.04332*.
Aaron Li-Feng Han, Xiaodong Zeng, Derek F Wong, and Lidia S Chao. 2015. Chinese named entity recognition with graph-based semi-supervised learning model. In *Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing*, pages 15–20.
Lifeng Han, Gleb Erofeev, Irina Sorokina, Serge Gladkoff, and Goran Nenadic. 2022. Examining large pre-trained language models for machine translation:
What you don't know about it. In *Proceedings of the* Seventh Conference on Machine Translation (WMT),
pages 908–919, Abu Dhabi, United Arab Emirates
(Hybrid). Association for Computational Linguistics.
Jerry R Hobbs, Douglas Appelt, David Is Bear, and Mabry Tyson. 1997. Extracting information from natural-language text. *Finite-state language processing*, page 383.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
George Hripcsak, John HM Austin, Philip O Alderson, and Carol Friedman. 2002. Use of natural language processing to translate clinical information from a database of 889,921 chest radiographic reports. *Radiology*, 224(1):157–163.
Philipp Koehn. 2009. *Statistical machine translation*.
Cambridge University Press.
Kamran Kowsari, Kiana Jafari Meimandi, Mojtaba Heidarysafa, Sanjana Mendu, Laura Barnes, and Donald Brown. 2019. Text classification algorithms: A survey. *Information*, 10(4):150.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
ACM Computing Surveys, 55(9):1–35.
Christopher Manning and Hinrich Schutze. 1999. *Foundations of statistical natural language processing*.
MIT press.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering.
arXiv preprint arXiv:1806.08730.
Erwan Moreau, Ashjan Alsulaimani, Alfredo Maldonado, Lifeng Han, Carl Vogel, and Koel Dutta Chowdhury. 2018. Semantic reranking of crf label sequences for verbal multiword expression identification.
James Pustejovsky, José M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R Radev. 2003. Timeml:
Robust specification of event and temporal expressions in text. *New directions in question answering*,
3:28–34.
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey.
Science China Technological Sciences, 63(10):1872–
1897.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Hangyu Tu. 2022. *Extraction of Temporal Information* from Clinical Free Text. MSc. Thesis, The University of Manchester.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Özlem Uzuner, Andreea Bodnari, Shuying Shen, Tyler Forbush, John Pestian, and Brett R South. 2012. Evaluating the state of the art in coreference resolution for electronic medical records. *Journal of the American* Medical Informatics Association, 19(5):786–791.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*,
21(1):5485–5551.
Özlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text.
Journal of the American Medical Informatics Association, 18(5):552–556.
Ellen Riloff. 1996. Automatically generating extraction patterns from untagged text. In *Proceedings of the* national conference on artificial intelligence, pages 1044–1049.
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. *arXiv preprint arXiv:1803.07416*.
Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference. *arXiv preprint* arXiv:2001.07676.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Chaitanya Shivade, Preethi Raghavan, Eric FoslerLussier, Peter J Embi, Noemie Elhadad, Stephen B
Johnson, and Albert M Lai. 2014. A review of approaches to identifying patient phenotype cohorts using electronic health records. *Journal of the American Medical Informatics Association*, 21(2):221–230.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. arXiv preprint arXiv:1909.03546.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism.
arXiv preprint arXiv:1909.08053.
Yuping Wu, Lifeng Han, Valerio Antonini, and Goran Nenadic. 2022. On cross-domain pre-trained language models for clinical text mining: How do they perform on data-constrained fine-tuning? arXiv preprint arXiv:2210.12770.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642.
William F Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C De Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova, et al. 2014. Temporal annotation in the clinical domain. *Transactions of the association for computational linguistics*, 2:143–154.
Weiyi Sun, Anna Rumshisky, and Özlem Uzuner. 2013a.
Annotating temporal information in clinical narratives. *Journal of biomedical informatics*, 46:S5–S12.
Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013b.
Evaluating temporal relations in clinical text: 2012 i2b2 challenge. Journal of the American Medical Informatics Association, 20(5):806–813.
Niall Taylor, Yi Zhang, Dan Joyce, Alejo NevadoHolgado, and Andrey Kormilitzin. 2022. Clinical prompt learning with frozen language models. *arXiv* preprint arXiv:2205.05535.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR.

## A Background And More Literature
In this section, we introduce some key concepts and then explore the methods and techniques used in clinical text mining, with a particular focus on temporal classification (Tu, 2022). We will begin by examining the fundamentals of clinical text mining and its applications in healthcare, followed by an in-depth discussion of the challenges associated with temporal event extraction and classification.
Next, we will delve into the recent developments in prompt-based learning and its potential to revolutionise the field of clinical text mining, including its ability to handle diverse NLP tasks with a unified framework.
Our objective is to provide a comprehensive overview of the current landscape of clinical text mining in the context of temporal classification, emphasising the emerging role of prompt-based learning and its potential to drive further innovation and improvement in healthcare research and practice.
## A.1 Temporal Classification From EHRs
Electronic Health Records (EHRs) have evolved from the concept of Computer Patient Records
(CPR) proposed by the Institute of Medicine in 1991 (Dick et al., 1997). Temporal relation classification of clinical events is crucial in understanding the chronological sequence and dependencies of events within electronic health records (EHRs). Extracting and analysing temporal information from EHRs can enhance our comprehension of disease progression, treatment efficacy, and patient risk factors, ultimately leading to improved healthcare outcomes.
## A.2 Related NLP Applications
Rule-based methods in NLP involve using a predefined set of linguistic rules, patterns, or heuristics to process and analyse text. These rules are often developed by domain experts or linguists, reflecting the inherent structure and patterns present in the language. For instance, in Named Entity Recognition (NER) tasks, rule-based approaches can identify proper names, organisations, and locations using regular expressions (Hobbs et al., 1997),
which often target words starting with a capital letter. Chapman et al. (2001) propose a rule-based algorithm designed for detecting negated concepts in clinical text. The advantages of rule-based methods include their speed and the lack of requirement for extensive computational resources.
However, rule-based methods have many limitations such as low recall (Riloff, 1996). In certain domains, only experts can develop effective rules.
Changes in the data source might render existing rules ineffective. Moreover, rule-based methods can be challenging to apply in temporal classification tasks involving free text, due to the absence of a standard format and the diverse and varied language expressions.
Statistical sequence models are particularly well-suited for language processing tasks due to their ability to handle variable-length sequences, such as sentences. CRFs have been widely used in sequence labelling tasks such as part-of-speech tagging, information extraction, and named entity recognition (NER) (Moreau et al., 2018; Han et al., 2015). In the clinical domain, Shivade et al. (2014) used a combination of HMMs and CRFs for clinical named entity recognition (NER) tasks. They used these methods to identify medical concepts such as medications, dosages, and durations from clinical text. Their results demonstrated that HMMs and CRFs could effectively recognize medical concepts, with CRFs outperforming HMMs in most cases.
Before the advent of word embeddings, researchers primarily used statistical techniques like one-hot encoding (Chren, 1998) and TF-IDF
(Aizawa, 2003) to represent words based on their frequency of occurrence in the text. This led to the creation of large, sparse vectors for word representation. The introduction of Word2Vec (Goldberg and Levy, 2014) offered several advantages, including lower-dimensional, dense, and continuous vectors that captured semantic similarity between words based on their co-occurrence with other words.
With the development of hardware capabilities, large neural networks have become feasible, which allows the exploration of deep learning architectures that can discover hidden features and automatically learn representations from the input in an end-to-end structure, mostly via the encoder-decoder style (Goodfellow et al., 2016). Collobert and Weston (2008) first introduced temporal convolutional neural networks (CNNs) for named entity recognition (NER) tasks. To model long sequences, Hochreiter and Schmidhuber (1997) proposed the long short-term memory (LSTM) model based on the architecture of recurrent neural networks (RNNs), addressing the challenge of capturing long-distance historical information and mitigating the vanishing gradient problem faced by RNNs.
Tu (2022) used a combination of Bidirectional Long Short-Term Memory (BiLSTM) and Conditional Random Fields (CRF) to perform Named Entity Recognition (NER) tasks on a clinical dataset.
The model achieved a weighted average accuracy of 0.98 and a macro-averaging score of 0.69. Additionally, they explored the use of a Convolutional Neural Network (CNN) with BiLSTM, resulting in improved performance compared to the BiLSTM+CRF model. This hybrid model demonstrated a precision of 85.67%, recall of 87.83%,
## A.3 Recent Large Language Models

## A.3.1 Pre-Trained Language Models
The development of the Transformer architecture by Vaswani et al. (2017) brought NLP to a new stage with its self-attention mechanism, which enhances the model's ability to capture long-range dependencies among words in the input sequence.
Pre-trained language models like BERT, GPT, and T5, which are based on the Transformer architecture, have achieved state-of-the-art performance on numerous tasks. These models learn contextualised word representations, different from traditional word representations (e.g., Word2Vec, GloVe), which map words to fixed-length vectors and assume words in similar contexts have similar meanings. In contrast, pre-trained models learn context-dependent representations, capturing contextual information more effectively (Qiu et al., 2020). This process allows models to better "understand" language, context, and words.
## A.3.2 Fine-Tuning Paradigm
Fine-tuning has been the traditional approach for adapting pre-trained language models (PLMs) to specific tasks. This is usually done by adding task-specific layers or heads on top of the pre-trained model and adjusting the model's weights through backpropagation (Wu et al., 2022). It has achieved state-of-the-art results in many NLP tasks, such as sentiment analysis (Socher et al., 2013), named entity recognition (Wadden et al., 2019) and machine translation (Vaswani et al., 2018; Han et al., 2022). However, it requires lots of training data, which may not be available in certain scenarios, and fine-tuning a model can be computationally expensive.
Fine-tuning From 2017 to 2019, there was a paradigm shift in NLP model learning, with researchers moving away from fully supervised methods and increasingly adopting the pre-training and fine-tuning paradigm. This approach uses a fixed architecture pre-trained language model (PLM) to predict the probability of observed textual data.
The PLM is adapted to different downstream tasks by fine-tuning additional parameters using objective functions specific to each task. For instance, Zhang et al. (2020) introduced a loss function for predicting salient sentences, and when combined with PLMs and fine-tuning, it resulted in state-of-the-art performance on various popular datasets and tasks (Devlin et al., 2018).
However, the fine-tuning approach is most suitable when large-scale text data is available for optimising the objective function, which is not always feasible in certain domains. In the case of clinical records, data privacy issues and the need for clinical experts to annotate data for training make it difficult to produce large open clinical datasets.
For example, BERT models trained on non-medical text tend to perform poorly when applied to medical domain tasks (Lee et al., 2020; Wu et al., 2022).
Additionally, each specific task requires its own fine-tuning process, and as the NLP field continues to increase model sizes to improve performance (e.g., Microsoft's Megatron (Shoeybi et al., 2019) with 530 billion parameters), full or partial fine-tuning of these massive models demands considerable computational and financial resources, and time (Han et al., 2022). These concerns have led to the emergence of a new paradigm called prompt-based learning, which aims to achieve strong performance across a wide range of applications without the need for extensive fine-tuning.
## A.3.3 Few-Shot Learning
Few-shot learning is an area of machine learning that focuses on training models to recognize or generalize new concepts with very limited labelled examples. This approach aims to alleviate the need for large amounts of labelled data, which can be costly and time-consuming to obtain. The few-shot learning problem is typically framed in terms of episodes, where each episode consists of a small support set and a query set. The support set contains a few labelled examples of each class, while the query set comprises unlabelled examples from the same classes. The goal is to learn a model that can accurately classify the query set instances based on the limited information provided in the support set. Finn et al. (2017) proposed MAML, a meta-learning algorithm that learns an optimal initialisation of model parameters, enabling rapid adaptation to new tasks with few gradient updates.
## A.3.4 Prompt-Based Learning Paradigm
Prompt-based learning is a recent paradigm in NLP that leverages pre-trained language models (PLMs) like GPT-3 (Brown et al., 2020) to perform various tasks without the need for fine-tuning. This approach involves using carefully designed prompts or templates that guide the PLM to generate desired outputs based on the input context. Moreover, this approach is especially useful in situations with limited task-specific training data, as it does not require retraining the entire model; however, crafting effective prompts for specific tasks can be challenging and may require manual engineering or iterative search procedures. This inspired us to construct a prompt-learning setup and compare it with more traditional fine-tuning methods.
Prompt-based learning emerged with the advent of models like T5 and GPT-3, as researchers discovered that pre-trained language models (PLMs)
could be effectively guided by textual prompts in low-data scenarios. The T5 model innovation suggested that PLMs possess strong language understanding capabilities, and by providing appropriate instructions or prompts, they can adapt to various tasks (Liu et al., 2023). This approach, dubbed "pretrain, prompt, and predict" or prompt-based learning, revolves around prompt engineering, which tailors prompts to suit different downstream tasks.
For instance, given the sentence "Patient is complaining of a stomachache" an emotion recognition task can be framed by adding a prompt like "Patient felt so ___", prompting the language model to fill in the blank with an emotion-laden word. Similarly, for translation tasks, a prompt like "English:
Patient is complaining of a stomachache, Chinese:
___" can be used. ChatGPT's ability to understand and answer questions in natural language can also be considered a form of prompting, influencing the quality of responses.
OpenPrompt Ding et al. (2021) introduced a unified, user-friendly toolkit called OpenPrompt to facilitate prompt-based learning with PLMs. OpenPrompt's modular and combinable research-friendly framework enables the integration of various tasks, prompting techniques, and PLMs while accommodating different template formats, verbalizer formats, and initialization strategies. Taylor et al. (2022) applied prompt learning to the clinical domain using frozen language models with the OpenPrompt framework. Their research compared prompt-based learning and fine-tuning in clinical classification tasks, finding that prompt learning typically matched traditional fine-tuning performance on full datasets and outperformed it in few-shot settings, which suggests that prompt learning is better suited to training with smaller datasets. Additionally, prompt learning excelled when working with frozen PLMs, showcasing its potential with fewer trainable parameters.
## A.4 Summary
In this section, we delve into prior work concerning temporal classification and examine the fundamental concepts and methods used in constructing our model. Given the absence of previous studies utilising prompt-based learning for temporal classification in the clinical domain, there are no established guidelines or approaches for this task.
In the following section, we will provide a detailed explanation of the methodology used to develop our model, outlining each step of the process.
## B On Dataset Used
Figure 8 presents the format used for training the model, where the discharge note column contains clinical text information, and the treatment entity column comprises treatment entities. The training dataset consists of 3,836 samples, with 3,075 having the label "ON" (treatment used during hospitalisation) and 762 having the label "OFF" (treatment not used during hospitalisation), resulting in an imbalanced distribution with label "ON" being four times more prevalent than label "OFF".
To gain a deeper understanding of the dataset, various statistical analyses were conducted. As depicted in Figure 9, the word count distribution for clinical notes, excluding the first five lines, is displayed. The first five lines of each note, which contain admission and discharge dates, are not considered beneficial for statistical analysis. The figure illustrates that most sentences have fewer than 20 words, and no sentences in the training dataset exceed 80 words. Based on this information, the maximum input sequence length can be determined.
| | document name | discharge note | treatment entity | label |
|------|---------------|----------------|------------------|-------|
| 0 | 422.xml.tlink | Admission Date : 2017–07–12 Discharge Date : 2... | oxycodone | 1 |
| 1 | 631.xml.tlink | ADMISSION DATE : 10/10/97 DISCHARGE DATE : 10/... | diabetes control | 1 |
| 2 | 272.xml.tlink | Admission Date : 2011–09–24 Discharge Date : 2... | extubated | 1 |
| 3 | 96.xml.tlink | Admission Date : 11/17/2003 Discharge Date : 1... | pain control | 0 |
| 4 | 422.xml.tlink | Admission Date : 2017–07–12 Discharge Date : 2... | a standing IVF order | 1 |
| 3832 | 422.xml.tlink | Admission Date : 2017–07–12 Discharge Date : 2... | repletion | 1 |
| 3833 | 736.xml.tlink | Admission Date : 03/17/1998 Discharge Date : 0... | Gentamicin | 1 |
| 3834 | 577.xml.tlink | Admission Date : 2009–06–23 Discharge Date : 2... | levofloxacin | 1 |
| 3835 | 177.xml.tlink | Admission Date : 2012–11–21 Discharge Date : 2... | CellCept | 1 |
| 3836 | 26.xml.tlink | Admission Date : 12/11/2005 Discharge Date : 1... | oral analgesics | 0 |

Figure 8: Training Dataset Format

## C Learning Models

## C.0.1 State-Of-The-Art PLMs

A pre-trained language model is a neural network model that has already been trained on a large corpus of text data before being fine-tuned for specific tasks (Han et al., 2022). These models are designed to learn the structure and nuances of a language by predicting the next word in a sentence or reconstructing a sentence with masked words. By learning the complex patterns and relationships
within the language, these models can generate contextually relevant embeddings or representations of words and phrases.
Masked Language Model: BERT
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model developed by Google researchers in 2018 (Devlin et al., 2018). As its name suggests, it uses the encoder architecture from the Transformer model but with a deeper structure, as shown in Figure 14. The BERT-base language model comprises 12 encoder blocks, which is twice the size of a standard Transformer encoder.
In contrast to OpenAI's GPT (Generative Pre-trained Transformer), BERT uses a bidirectional Transformer block connection layer (Figure 15), allowing it to access information from both preceding and following content, while GPT only considers the preceding content during training. The concept of "bi-directionality" is not new: for example, ELMo uses two separate objective functions, $P(w_i \mid w_1, \ldots, w_{i-1})$ and $P(w_i \mid w_{i+1}, \ldots, w_n)$, to train the language model. However, BERT uses a single objective function:

$$P(w_i \mid w_1, \ldots, w_{i-1}, w_{i+1}, \ldots, w_n) \qquad (1)$$

to train the language model, integrating both preceding and following context.
The Masked Language Model (MLM) serves as one of BERT's pre-training tasks, wherein it randomly masks certain words in a sentence with the
[mask] token. By leveraging the bidirectional Encoder Representations, BERT predicts the masked words based on both preceding and following context, resulting in a more comprehensive understanding of word meanings. Additionally, the Next Sentence Prediction (NSP) pre-training task trains the model to discern the relationship between sentences by determining whether sentence B follows sentence A in the original text (Devlin et al., 2018).
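A quick way to see the MLM objective in action is the Hugging Face fill-mask pipeline; the model choice and example sentence below are illustrative.

```python
# Illustration of masked language modelling: BERT predicts the [MASK] token from
# both the left and right context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("The patient was given [MASK] for pain control."):
    print(pred["token_str"], round(pred["score"], 3))
```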
The input for BERT consists of Token Embeddings, Segment Embeddings, and Position Embeddings, as illustrated in Figure 16. Each input sentence is treated as a sequence of tokens, with every sequence starting with a special classification token,
[CLS]. BERT uses another special token, [SEP],
to separate sentences and assigns segment embeddings to each token to indicate whether it belongs to sentence A or B. This enables BERT to handle various downstream tasks, such as separating question and answer sequences (Devlin et al., 2018).
By incorporating position embeddings, the model generates distinct word vector outputs for the same word based on its contextual environment, thereby enhancing the model's accuracy.
Fine-tuning enables BERT to accommodate various downstream tasks by adjusting the corresponding inputs and outputs (Figure 17 ). The same pretrained model parameters are used to initialise models for different downstream tasks, and all parameters are fine-tuned end-to-end to adapt the model to the specific task. In comparison to pre-training, fine-tuning is relatively cost-effective and computationally efficient.
Auto-regressive Language Model: GPT-2 The Generative Pre-trained Transformer 2 (GPT-2) is
an advanced language model introduced by OpenAI in 2019, building upon the foundation of the original GPT (Radford et al., 2019). GPT-2 uses a transformer-based decoder architecture with multilayer, multi-head self-attention mechanisms, as shown in Figure 18. This design allows GPT-2 to generate sequences of arbitrary length, making it particularly adept at producing highly coherent and contextually relevant text, often used for questionanswering and summarization tasks.
GPT-2 differs from BERT in several ways. As an autoregressive model, GPT-2 predicts one token at a time, using previously generated tokens as context for subsequent predictions, based on the conditional probability $p(s_{n-k}, \ldots, s_n \mid s_1, \ldots, s_{n-k-1})$. This process continues until the desired output length is achieved or an end-of-sequence token is generated. By modelling a sequence of outputs as a product of conditional probabilities, GPT-2 leverages the natural sequence of symbols inherent in language. Unlike BERT's bidirectional approach, GPT-2 uses masked self-attention, processing input sequences in a unidirectional manner, resulting in more contextually relevant text generation (Radford et al., 2018).
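A minimal sketch of this left-to-right generation with the HuggingFace implementation of GPT-2 follows; the prompt text is invented, and greedy decoding is chosen purely for illustration.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# GPT-2 generates one token at a time, conditioning only on the preceding tokens.
inputs = tokenizer("The patient was discharged after", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```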
One innovative aspect of GPT-2 is its ability to perform supervised learning tasks using an unsupervised pre-training model. While traditional supervised learning aims to estimate $p(\text{output} \mid \text{input})$, GPT-2 seeks to model $p(\text{output} \mid \text{input}, \text{task})$, allowing for a more generalised model across various tasks. This approach has been used in multitask and meta-learning settings. For instance, a translation training example could be presented as a sequence (translate to French, English text, French text), enabling the model to understand the translation task and the relationship between input and output (McCann et al., 2018).
Seq2Seq: T5

T5, an abbreviation for Text-To-Text Transfer Transformer, proposes the idea that fine-tuning models for specific tasks may no longer be necessary (Raffel et al., 2020). Instead, a large pre-trained model can be used for any task, with the main focus on adapting the task into appropriate textual inputs and outputs (Raffel et al., 2020). For example, as shown in Figure 19, in translation tasks, inputting "translate English to German" followed by a [sequence] results in the model producing the translated [sequence]. Similarly, for summarization tasks, inputting "summarise" along with the [sequence] generates a summary of the [sequence]. This method establishes a unified Text-to-Text format for NLP tasks, expressed as $[\text{Prefix} + \text{Sequence}_A] \rightarrow [\text{Sequence}_B]$, enabling the use of the same model, loss function, training process, and decoding process across all NLP tasks with different prefix information.
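A minimal sketch of this text-to-text usage with the HuggingFace T5 implementation is given below; the prefix and input sentence follow the translation example above and are illustrative rather than the configuration used in the experiments.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The task is specified purely through a textual prefix; the model maps text to text.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```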
To accomplish this, a powerful language model that genuinely comprehends language is required.
The Google team developed a strategy to determine the optimal model architecture and parameters, ultimately creating a robust baseline. First, they examined three popular model architectures.
The encoder-decoder Transformer (Vaswani et al.,
2017), also known as a seq2seq model (left panel of Figure 20), comprises two layer stacks: the encoder processes the input sequence and encodes each token, while the decoder generates a new output sequence with each token based on the decoding input and previous output sequences. The language model architecture (middle one of Figure 20), akin to the decoder in an encoder-decoder Transformer, predicts output at each time-step based on previous time-step predictions, with GPT-2 being a typical example. The Prefix LM (prefix language model) incorporates fully-visible masking applied to the prefix, rendering the architecture more effective for a wide range of text-to-text tasks, as shown in the right panel of Figure 20. Following experimentation, the Google team determined that the encoder-decoder architecture is the most suitable for the text-to-text framework, thus adopting it for T5 (Raffel et al.,
2020).
Subsequently, they used masked language modelling (BERT-style) as the unsupervised pre-training objective. This is similar to BERT, but uses masks that replace whole spans of the original tokens as the corruption strategy, with a 15% corruption rate and a corrupted-span length of 3 chosen according to their experimental results.
After utilising multi-task learning to train on the C4 (Colossal Clean Crawled Corpus) dataset, which comprises hundreds of gigabytes of clean English text extracted from the web, the Google team obtained their best pre-trained language model, T5, from among numerous combinations of model architectures, training methods, and parameter settings.
## C.1 Prompt-Based Learning
Prompt Construction The first step involves creating a prompting function $f_{\text{prompt}}(\cdot)$, which transforms the input $x$ into a prompted $x' = f_{\text{prompt}}(x)$
(Liu et al., 2023). This function entails two stages:
(1) Designing a *template*, a string containing an input slot [X] for the input x and an *answer slot* [Z] for the generated answer, which is mapped to the output y. (2) Filling the slot [X] with the input x.
In the case of temporal classification for the treatment "a total abdominal hysterectomy," the template could be structured as "[Input] Here is the clinical record, treatment a total abdominal hysterectomy [Z] during the hospitalisation." Additionally, templates can be categorised based on the position of the empty slot, such as cloze prompts (with the slot in the middle of the text) or prefix prompts (with the input text appearing entirely before the slot [Z]) (Liu et al., 2023).
Answer Selection Subsequently, the language model (LM) is used to identify the highest-probability text $\hat{z}$. Liu et al. (2023) characterise $\mathcal{Z}$ as a collection of acceptable values for $z$, indicating that the LM determines the most probable answer $z$ from the set of answers $\mathcal{Z}$. This process is also referred to as answer engineering or verbalisation (we will consistently use the terms verbalizer and verbalization).
The verbalizer can be regarded as a mapping between one or more distinct tokens and unique class labels. The embedding generated by the PLM at the <[MASK]> position is passed through a language model head or classifier, and predictions for the tokens defined by the verbalizer's class labels are obtained. In the previous temporal classification example, $\mathcal{Z}$ = {"is", "is not"} corresponds to the class labels $\mathcal{Y}$ = {ON, OFF}.
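Conceptually, the verbalizer therefore reduces to a mapping from label words to class labels plus a comparison of the PLM's scores for those words at the <[MASK]> position. The sketch below is purely illustrative and the scores are invented.

```python
# Verbalizer: label words -> class labels (as in the temporal classification example).
verbalizer = {"is": "ON", "is not": "OFF"}

# Hypothetical PLM scores for each label word at the <[MASK]> position.
mask_scores = {"is": 0.81, "is not": 0.19}

# Answer selection: pick the highest-scoring label word and map it to its class.
best_word = max(verbalizer, key=lambda word: mask_scores[word])
predicted_label = verbalizer[best_word]
print(best_word, predicted_label)  # -> "is", "ON"
```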
The function $f_{\text{fill}}(x', z)$ fills the slot [Z] in the prompt $x'$ with a potential answer $z$. Lastly, the probability of the corresponding filled prompt is calculated using a PLM $P(\cdot; \theta)$, as shown in Eq. 2:

$$\hat{z} = \operatorname{search}_{z \in \mathcal{Z}} P(f_{\text{fill}}(x', z); \theta)\tag{2}$$
The search function could use argmax for the highest-scoring output or sampling to randomly generate outputs according to the LM's probability distribution (Liu et al., 2023).
Answer Mapping The final step maps the highest-scoring answer $\hat{z}$ to the highest-scored output $\hat{y}$. While this step might not be crucial in binary classification, it is necessary for tasks like translation or sentiment analysis with multiple words
(e.g., "good", "wonderful", "perfect") mapped to the same class (e.g., "++"). Thus, a mapping process between the answer and the true output value is required (Ding et al., 2021).
## D Parameters And Settings
The code below shows how to load the T5 PLM and its tokenizer in OpenPrompt:

from openprompt.plms import load_plm
plm, tokenizer, model_config, WrapperClass = load_plm("t5", "t5-base")
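For context, the loaded components are typically combined with a template and a verbalizer into a prompt-based classification model. The sketch below is illustrative only: the template text and label words are assumptions and do not reproduce the exact configuration used in this work.

```python
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification

# Template with an input slot and a mask slot (the wording here is illustrative).
template = ManualTemplate(
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} The treatment {"placeholder":"text_b"} {"mask"} used '
         'between admission and discharge time.',
)

# Verbalizer mapping label words to the ON/OFF classes (label words are assumptions).
verbalizer = ManualVerbalizer(
    tokenizer=tokenizer,
    classes=["ON", "OFF"],
    label_words={"ON": ["is"], "OFF": ["not"]},
)

# Combine the PLM, template, and verbalizer into one classification model.
prompt_model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)
```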
## E More Discussion On Plm Outputs
![18_Image_0.Png](18_Image_0.Png)
The dataset we used is derived from clinical notes, implying that in real life, there are indeed more positive labels than negative ones. In some cases, having a high recall may be more important than having high precision. For instance, in medical diagnosis, it could be crucial to identify all patients with a specific disease (high recall) to ensure they receive appropriate treatment, even if some healthy patients are misclassified as having the disease (low precision). It is unclear whether recall is more important than precision in the context of temporal information of treatment. However, doctors can adjust the model's preference based on their specific situations.
It is not surprising that T5 outperforms the other models in the comparison. Firstly, T5 is the most recent model among the three and has been extensively tested by Raffel et al. (2020) to evaluate its advantages and disadvantages relative to the other architectures. Their results suggest that T5's encoder-decoder architecture performs better than BERT and GPT-2 in certain tasks. Our experiment also demonstrates that T5 has a slight advantage over BERT and, more notably, GPT-2, which exhibit comparable performance.
Secondly, although it is not universally true that
"bigger models are better" in the NLP field, OpenAI has made significant strides in showcasing the effectiveness of larger models in recent years. The development of models such as GPT-2, GPT-3, and, more recently, Megatron-Turing, has demonstrated that models with more parameters can improve performance on a variety of natural language processing tasks, as illustrated in Figure 10. In our experiment, we used *bert-base-uncased*, which has 110M parameters, and the *gpt-2* model with 117M
parameters. However, the *T5-base* model has 220M parameters, twice as many as *bert-base-uncased*.
Therefore, T5 is the best model for temporal classification in the clinical domain when compared to the other two models.
## F Pbl With Different Input Text
One intuitive method to create prompts is to manually craft templates based on human understanding.
For instance, we can create a cloze-style manual template as in Listing 3, where the <[MASK]> token appears in the middle of the template. According to the code example, the <[MASK]> token can be filled with "is" or "is not".
Listing 3: Example of cloze manual template

text = '<[clinical_record]> In this paragraph of the note, <[treatment]> <[mask]> used between admission and discharge time.'
Another popular manual template approach is the question prompt shown in Listing 4, in which the <[MASK]> token is placed at the end. In this template, a discriminative statement or question is presented, such as "Question: this treatment was used between admission and discharge time. Is it correct?" Combined with the clinical context input, the PLM decides whether the statement is correct. Therefore, the possible answers for <[MASK]> can be "yes" or "no".
Listing 4: Example of manual template with question

text = '<[clinical_record]> Question: <[treatment]> were used between admission and discharge time. Is it correct? <[mask]>'
In previous work, Gu et al. (2021) report that a template mixing manual tokens and soft tokens can, in some cases, yield better results than purely manual or purely soft templates, and Taylor et al. (2022) propose that a soft template combined with a soft verbalizer performs best on the ICD-9 Triage task in the clinical domain.
During manual template engineering, some interesting findings were made. Initially, the manual template was designed as "<clinical note>. Question: <treatment> was used during hospitalisation.
Is it correct?". While this appeared sufficient, upon analysing errors in the testing data, a particular example revealed that the treatment in question was used during the patient's last hospitalisation but not the current one. Consequently, the template was modified to specify "between admission and discharge time", which better emphasised the temporal aspect.
Furthermore, certain errors were identified due to complex language logic. During this period, ChatGPT was a popular topic in the NLP domain, and the GPT-3.5 model demonstrated remarkable question-answering abilities. We input a template (shown in Figure 11) to ChatGPT, and the model provided an incorrect response despite giving an accurate explanation, meaning its answer was not self-coherent. This indicates that GPT-3.5 and the T5 model have difficulty capturing information from words such as "attempt" and "but".
By comparing the results of the cloze prompt (Listing 3) and the question prompt (Listing 4) in the manual template, it was found that the question prompt performed better. This suggests that the PLM may be more proficient in judging discriminative statements or providing answers after processing the entire input sentence. The (Mixed, Manual) pair also performed well, possibly because the generated soft tokens, based on the input sentence and fixed template tokens, provided guidance for the model to better select an answer from the set of possible responses.
## F.0.1 Different Input Text
Experiments of Different Input Text In this experiment, the input length for clinical records was modified by controlling the number of sentences in the input text using a sentence window size, as well as the number of sentences before and after the target sentence.
Discussion and Summary of Different Input Text The results displayed in Table 8 indicate that as the number of input sentences increases, both the F1 score and balanced accuracy improve. However, when the input text becomes too long, such as the entire clinical text, the performance slightly declines. It was found that a window size of 6, comprising 3 sentences before the target sentence, the target sentence itself, and 2 sentences after, yielded the best F1 score and balanced accuracy of 91.79 and 75.08, respectively.
## G Pbl Vs Traditional Fine-Tuning

## G.0.1 Summary Of Prompt-Based Learning Evaluation
In conclusion, the prompt-based learning paradigm experiments led to the establishment of a benchmark for the best-performing prompt model. The hyperparameter details are provided in Table 9. In the following section, this model will be compared to the traditional fine-tuning paradigm using a few-shot learning approach.
## G.1 Prompt Learning Versus Traditional Fine-Tuning
In this section, we present a benchmark comparison between Prompt-based Learning (PBL) and Traditional Fine-tuning (FT) under few-shot settings. Table 10 displays the selected hyperparameters for fine-tuning. We chose to focus on a mixed template approach, which combines a manually designed template for the task with soft and trainable tokens. Since few-shot scenarios can introduce bias and variance that significantly affect performance, we aggregated the results from 10 trials and averaged them, providing a more accurate assessment.
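For reference, such a mixed template can be expressed in OpenPrompt by interleaving fixed text with trainable soft tokens. The snippet below is a sketch that reuses the plm and tokenizer loaded in Appendix D; the template wording and the number of soft tokens are assumptions rather than the exact template used in these experiments.

```python
from openprompt.prompts import MixedTemplate

# {"soft"} inserts a trainable soft token; the remaining words act as fixed manual text.
mixed_template = MixedTemplate(
    model=plm,
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} {"soft"} {"soft"} the treatment {"placeholder":"text_b"} '
         '{"mask"} used between admission and discharge time.',
)
```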
The results (Table 7 and Figure 7) indicate that in the temporal classification task, the traditional fine-tuning model outperforms the prompt learning model. The prompt learning model performs better than the fine-tuning model only when the training set size is 10 in terms of F1 score, and when the dataset size is 20, the prompt learning model's balanced accuracy is slightly higher. This finding is consistent with Taylor's work (Taylor et al., 2022),
which showed that prompt learning did not outperform fine-tuning in various clinical domain classification tasks, such as ICD-9 50, ICD-9 Triage, and In-hospital mortality. However, in specific classification tasks under Frozen PLM conditions, prompt learning exhibited better performance. In this context, "frozen" refers to the absence of updates to the model's weights and parameters during the finetuning process.
These results were surprising, as prompt learning has been frequently reported to be more effective in few-shot settings in numerous publications.
There could be several reasons for this discrepancy.
First, the soft and trainable tokens in the mixed template were not trained using a separate optimizer, which may have resulted in suboptimal tokens for the given task. Second, the benchmark for prompt learning might not be accurate due to computational resource and time limitations.
Figure 11: Example of error analysis with ChatGPT ("tap" is the treatment).
![20_image_0.png](20_image_0.png)
![20_image_1.png](20_image_1.png)
| Sentences window size | (sentences before, sentences after) | F1 score of ON class | B.Accy. |
|-------------------------|---------------------------------------|------------------------|-----------|
| 1 | (0,0) | 88.13 | 64.24 |
| 2 | (0,1) | 89.58 | 65.89 |
| 3 | (1,1) | 90.89 | 72.07 |
| 4 | (1,2) | 91.60 | 73.29 |
| 5 | (2,2) | 91.93 | 73.00 |
| 6 | (3,2) | 91.79 | 75.08 |
| 7 | (3,3) | 91.44 | 71.71 |
| Whole text | | 84.86 | 63.95 |
Table 8: Performance of Different Input Text (B.Accy.: Balanced Accuracy)
| Parameter | Value |
|-----------------------------|-------------------|
| PLM | T5 |
| learning rate | 4E-5 |
| batch size | 4 |
| epochs | 5 |
| optimizer | AdamW |
| template | mixed template |
| verbalizer | manual verbalizer |
| input sentences window size | 6 (3,2) |

Table 9: Hyperparameter Selection for Prompt-based Learning

| Parameter | Value |
|-----------------------------|-------|
| PLM | BERT |
| learning rate | 2E-5 |
| batch size | 4 |
| epochs | 5 |
| optimizer | AdamW |
| input sentences window size | 6 (3,2) |

Table 10: Hyperparameter Selection for Fine-tuning
For instance, the best PLM and learning rate were determined based on a manual template and manual verbalizer, but these selections may not be ideal for mixed and soft templates. Third, potential biases in the training process could have impacted the results, as no validation set was used for prompt learning, possibly preventing the selection of the best model during training. Furthermore, averaging the results of 10 trials might not provide a sufficiently accurate assessment, and more trials could be necessary. Fourth, in a few-shot learning scenario, using a language model pre-trained on medication and clinical domain data might be more beneficial for clinical classification tasks. Finally, prompt-based learning is a relatively new paradigm with much untapped potential, whereas traditional fine-tuning has a well-developed training and tuning process.
Upon examining errors from the test dataset of prompt-based learning, specifically for both "ON"
"label": "OFF", "meta": "727.xml.tlink",
"clinical_record": "Admission Date: 2014-03-31 Discharge Date :
2014-04-01 . Dorctor notes: No maternal fever . No prolonged rupture of membranes . Clear amniotic fluid . Anesthesia by epidural . Vaginal delivery . Apgars were 8 and 9 . "
"treatment": "Anesthesia",
Figure 12: Example of an error in OFF class
"label": "ON",
"meta": "208.xml.tlink",
"clinical record": "Admission Date : 2018-05-26 Discharge Date :
2018-05-31 . Dorctor notes: WBC s since admission were as high as 14,000 but normalized . She also had 2 echocardiograms which revealed persistent pericardial effusions . She has been gently diuresed but has worsening ARF . Her 02 requirement has increased despite diuresis . She denies any CP / cough / fever , abdominal pain / diarrhea , black or bloody stools or headache . Her urine output decreased to nearly zero .
"treatment": "diuresis",
and "OFF" classes as shown in Figures 12 and 13 ,
it becomes evident that determining whether a treatment was administered during hospitalisation can be challenging. The input content often lacks sufficient temporal information to clearly indicate the treatment status. Furthermore, there are instances of ambiguity in the dataset annotations, which complicates the classification task. The sentence tense and specific temporal expressions might be the only cues for understanding the event timeline, even for human readers, without considering the broader context of the document. It is also worth noting that discharge summaries are typically prepared at the end of a patient's hospital stay, and as such, they do not describe the hospitalisation period as the present. These observations highlight the complexities involved in classifying temporal relationships in clinical texts and the need for further improvements in methods to effectively address such challenges.
## H Learning Structures
Figure 21 illustrates the general architecture of OpenPrompt, which allows for modifications to the PLM-related class (purple block) and the prompt-related class (blue block).
![21_image_0.png](21_image_0.png)
![22_image_0.png](22_image_0.png)
![22_image_1.png](22_image_1.png)
![22_image_3.png](22_image_3.png)
![22_image_2.png](22_image_2.png)
![22_image_4.png](22_image_4.png)
![22_image_5.png](22_image_5.png)
![23_image_0.png](23_image_0.png)
![23_image_1.png](23_image_1.png)
![23_image_3.png](23_image_3.png)
![23_image_2.png](23_image_2.png)
![23_image_4.png](23_image_4.png)
![23_image_5.png](23_image_5.png)
|
bonafilia-etal-2023-sudden | Sudden Semantic Shifts in {S}wedish {NATO} discourse | https://aclanthology.org/2023.acl-srw.28 | In this paper, we investigate a type of semantic shift that occurs when a sudden event radically changes public opinion on a topic. Looking at Sweden{'}s decision to apply for NATO membership in 2022, we use word embeddings to study how the associations users on Twitter have regarding NATO evolve. We identify several changes that we successfully validate against real-world events. However, the low engagement of the public with the issue often made it challenging to distinguish true signals from noise. We thus find that domain knowledge and data selection are of prime importance when using word embeddings to study semantic shifts. | # Sudden Semantic Shifts In Swedish Nato Discourse
Brian Bonafilia, Bastiaan Bruinsma, Denitsa Saynova, and Moa Johansson Chalmers University of Technology [email protected], {sebastianus.bruinsma, saynova, moa.johansson}@chalmers.se
## Abstract
In this paper, we investigate a type of semantic shift that occurs when a sudden event radically changes public opinion on a topic. Looking at Sweden's decision to apply for NATO
membership in 2022, we use word embeddings to study how the associations users on Twitter have regarding NATO evolve. We identify several changes that we successfully validate against real-world events. However, the low engagement of the public with the issue often made it challenging to distinguish true signals from noise. We thus find that domain knowledge and data selection are of prime importance when using word embeddings to study semantic shifts.
## 1 Introduction
A well-known adage in Natural Language Processing is that one knows a word by the company it keeps (Firth, 1957). Yet, this company does not need to be stable and can change in either the long or short term. When this happens, the word undergoes a *semantic shift*. One common way to study these semantic shifts is by using temporal –
or diachronic - word embeddings.
Most semantic shifts are slow and happen over many years or decades. Examples are words such as "nice", "broadcast" and "gay" which today have a different meaning than they would have had in the nineteenth century. Yet, while such shifts occur over various decennia, other shifts are more rapid. For example, the word "hero" changed its context from "veteran" and "superman" to "frontliner" and
"covidwarrior" during the COVID-19 pandemic in a matter of months (Guo et al., 2022).
The speed of semantic change depends on various factors, such as whether the word has more than one meaning or how common it is in use (Hamilton et al., 2016). Also, *sudden* semantic change can occur during high-impact events, such as abrupt political, social, or cultural changes. For example, Tahmasebi et al. (2012) notes that the meaning of the word "terrorism" changed rapidly after the events of September 11, 2001. This, combined with the knowledge that a change in the meaning of a word also changes the opinions people associate with that word (Pérez and Tavits, 2023), makes understanding such sudden shifts relevant if we wish to understand people's changing opinions during real-world events.
Here, we use word embeddings to focus on an abrupt event in the case of Sweden: the country's decision to apply for NATO membership in 2022, following the Russian invasion of Ukraine. This decision was a sudden shift and a marked change in the country's stance on foreign affairs and defense.
To study this shift, we focus on the time from September 11, 2021, to September 11, 2022, the day of the 2022 Swedish general election. We chose this period as we wished to examine how the language used around NATO changed under the assumption that NATO would be a major election issue in Sweden. To measure the semantic shifts, we use the word embeddings from a Word2Vec
(Mikolov et al., 2013) model to estimate the semantic context of a set of words of interest. We then track these words over time to see if and how they changed by comparing the rank sorting of the most similar words between various periods.
From here on, this paper will proceed as follows. First, we will introduce the background to the Swedish application for NATO membership, and how it can serve as a marked and sudden change.
We then introduce our data and the procedure we used for pre-processing. Following this, we discuss our methods and the findings that result from them.
We end with some brief conclusions and several suggestions for further research.
## 2 Background
For over two hundred years, Sweden followed a self-proclaimed policy of non-alignment ("alliansfrihet") (Brommesson et al., 2022). As a result, it did not take part in most major wars, nor became part of any military alliance during the Cold War. And while it often participated in NATO exercises (Wieslander, 2022), full membership was rarely considered. Thus, Minister for Defense Peter Hultqvist could describe a Swedish membership of NATO as unthinkable as late as November 2021
(Bolin, 2023, p.307). After the invasion of Ukraine in February 2022 though, the government changed its position. This sudden change was possible due to the support of the opposition for membership and the disengagement of most citizens on the issue (Hinnfors, 2022). As a result, the government announced its plans to join NATO on April 13 and formally applied for NATO membership on May 16, 2022.
Within this timeframe, three events are of note.
First, there was the Turkish opposition to Swedish membership, rooted in that country's opposition to Sweden's support for Kurdish parties and activists
(Henley and Michaelson, 2022). Second, there was a "No Confidence" vote in the Swedish House of Representatives on the future of Minister for Justice Morgan Johansson. While he survived this vote thanks to the support of Kurdish-Iranian MP
Amineh Kakabaveh, in return the government had to affirm an earlier agreement made in 2021 that stated that "people from those [Kurdish] organizations coming to Sweden are not terrorists" - a line of reasoning that went straight against Turkish demands (Duxbury, 2022). Third, there was the NATO Summit that took place between 28 - 30 June, where all NATO members (Turkey included)
extended a formal invitation to both Finland and Sweden to join NATO.
A final point of note is that over this period, the application to NATO membership was what Berglez (2022) calls a "hidden issue". That is, both the government and opposition aimed to - and succeeded - in drawing attention away from it and were thereby followed by most of the media. An illustration of this is that the words "alliansfrihet" and "NATO" only occurred respectively 471 and 7936 times in the main Swedish media over the period of a year around the application. Moreover, the use of both words peaks around May, after which their number drops to almost zero until the elections in September.
## 3 Related Work
We base our decision to use global word embeddings to capture sudden semantic shifts on a well-founded body of work. Not only are they able to capture the semantic similarity and alignment between words, but they are also able to track the shifts in the meaning of political concepts. For example, Guo et al. (2022) show that the meaning of medical words changed before and after the first outbreak of Covid-19, while Rodman (2020) and Rheault and Cochrane (2020) do the same for parliamentary data, and Durrheim et al. (2023) successfully use global embeddings to measure sociological concepts such as bias.
Of note is that all these papers opt to use *global* word embeddings instead of *contextual* word embeddings (e.g. ELMo (Peters et al., 2018), BERT
(Devlin et al., 2019)). While *global* word embeddings associate a single embedding vector with a word, *contextual* word embeddings assign a different vector for the same word depending on the sentence in which it appears. While this has the advantage of being able to take the context of the specific occurrence of a word into account, it does not provide a way to represent the position of a single word in the embedding space. That is, when we care about the global shift of words (as we do here),
we need a global and not a contextual embedding.
As such, most authors in the social sciences, and we here as well, opt to use global embeddings.
## 4 Data

To measure our semantic shifts, we rely on Swedish-language Twitter posts ("tweets") that focus on NATO. We do so as Twitter's broad user base touches all segments of society, allowing us to get a complete picture of the debate around NATO.
Besides, as tweets have a limit of 280 characters, their length is very similar. This has the advantage that it improves data consistency while reducing computational complexity.
Within our year-long period, we collected 1,188,556 tweets, made by a total of 64,315 users participating in 507,359 conversations. Of these, 329,336 are retweets, leaving 859,220 original tweets. We collected a tweet if it contained any one of a set of search terms relating to NATO. To generate these terms, we drew on both theoretical expectations (deductive) as well as first results (inductive). As such, we ended up with seventy-five unique search terms covering NATO, alliances, and the war in Ukraine (see Bonafilia (2023) for a complete list). Many of these words were either compound words that contain "nato" or relate to NATO and are specific enough to only occur in that context. Thus, we did not include general terms such as "allians" (alliance), unless they were part of the phrase "militär allians" (military alliance) or
"allians med turkiet" (alliance with Turkey). In the end, we included a tweet when: a) it contained any of the search terms, b) the tweet is a response to another tweet that contained a search term, or c)
the tweet has a response containing a search term.
Based on the background of the NATO issue as sketched above, we divide our tweets into four periods. First, there is the pre-invasion period, ranging from September 11, 2021, to 24 February 2022 (the date of the Russian military invasion of Ukraine). Second, there is the post-invasion period running from February 24 to April 13, the date of the joint press conference of the Swedish PM
Andersson and her Finnish colleague Marin, where both announced the possibility of their countries joining NATO. Third, there is the pre-application period, running between April 13 and the formal application on May 16. Finally, there is the post-application period, running between May 16 and the elections on September 11. Table 1 shows the number of tweets for each of the periods.
| Period | Tweets | Words |
|------------------|---------|-------|
| Pre-Invasion | 131 889 | 2.3 M |
| Post-Invasion | 413 517 | 6.8 M |
| Pre-Application | 294 453 | 5.1 M |
| Post-Application | 346 948 | 5.4 M |
Table 1: Sizes of the Twitter dataset for each period.
To support our choice for these four periods, we look at the daily number of tweets we gathered
(see Figure 1). Here, we see that at the boundaries of the four periods (indicated by arrows 2, 3, and 5) there are clear peaks in the number of tweets. Besides, we find smaller peaks between January 15 - 19 (during the Russian military build-up near the Ukrainian border), on May 13 (the first Turkish signal of opposition to Sweden's entry into NATO), on June 7 (during the "No Confidence" vote against Morgan Johansson), and on June 28 (the NATO
summit in Madrid).
## 5 Pre-Processing
Given that the choice - and order of - preprocessing steps will influence our analysis, we discuss each of these steps in turn (Denny and Spirling, 2018). First, we remove any URLs and mentions to other users as well as some minor punctuation. Second, we split our tweets into individual tokens. For this, we use the NLTK library's nltk.TweetTokenizer, as it splits hashtags and emojis better than other tokenizers (Bird et al., 2009).
Third, we lowercase all tokens, create n-grams
(with no limit, so 3-grams can occur), and remove all remaining punctuation. Finally, we normalize the spelling of our tokens to address the various spellings of the same word (e.g. "grey" and "gray").
For a more detailed overview of the pre-processing see Bonafilia (2023).
We did not perform the common steps of removing stop words or lemmatizing the tokens, as we found that these steps weakened the relationship between related words. Singletons and low-frequency words were filtered out by the Gensim library (Řehůřek and Sojka, 2010), which was used for the analysis.
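A minimal sketch of such a pipeline is given below; it assumes the collected tweets are available as a list of strings in `tweets`, and the regular expression, the `min_count` value, and the omission of the punctuation-removal and spelling-normalization steps are simplifications rather than the exact settings used in the study.

```python
import re
from nltk.tokenize import TweetTokenizer
from gensim.models.phrases import Phrases, Phraser

tweet_tokenizer = TweetTokenizer()

def preprocess(tweet):
    # 1) remove URLs and user mentions, 2) tokenize, 3) lowercase
    tweet = re.sub(r"https?://\S+|@\w+", "", tweet)
    return [token.lower() for token in tweet_tokenizer.tokenize(tweet)]

corpus = [preprocess(t) for t in tweets]

# 4) build n-grams by applying the phrase model twice (bigrams, then trigrams)
bigrams = Phraser(Phrases(corpus, min_count=10))
trigrams = Phraser(Phrases(bigrams[corpus], min_count=10))
corpus = [trigrams[bigrams[doc]] for doc in corpus]
```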
## 6 Method
The model we chose to find our word embeddings is *Word2Vec* (Mikolov et al., 2013). This is a single-layer neural network that is trained to predict a word from its context - Continuous Bag-of-Words (CBOW) - or context from a given word - Skip-gram (SG). We opted to use both architectures given that they are different in the associations they capture, their computational efficiency, and their sensitivity to less-frequent words (Mikolov et al.,
2013).
## 6.1 Training Of The Model
As with all other embedding models, *Word2Vec* needs a large amount of text to be able to capture word associations. As the tweets from each period contained insufficient data to train a new model, we used Twitter data for each period to *fine-tune* an already trained model representing general Swedish.
This initial model was trained on Swedish media text (Göteborgs-Posten, SVT, and Wikipedia) from 2003 until 2014, made available by Språkbanken's Korp language resource (Borin et al., 2012). The total number of tokens in this corpus is 759 million, with about 1.04 million unique tokens which appear at least ten times. We chose the cut-off dates
![3_image_0.png](3_image_0.png)
of 2003 and 2014 to avoid biasing the model with inputs from after the Maidan uprisings in Ukraine in 2014. This control over the input period and model parameters was our main motivation to train and validate a new model rather than use a publicly available set of pre-trained vectors.
We then trained two base models - one for the Skip-gram and one for the Continuous Bag-ofWords architecture. For both, we used Negative Sampling, a window size of 5, a minimum number of word occurrences of 10, and 160 training iterations. To validate our base model, we used the word similarities and relatedness from SuperSim by Hengchen and Tahmasebi (2021) and a QVEC-CCA scoring as introduced by Tsvetkov et al. (2016) using a Swedish pack available from Språkbanken's Korp (Borin et al., 2013). In all cases, the results indicated that the base models were well trained (Bonafilia, 2023).
We then fine-tuned both the SG and CBOW architectures on the tweets made within each period, using our pre-trained models as a base. Because the Word2Vec model training is a stochastic process, and as we have to account for instability due to data variability, we trained 10 models for each case on a different uniform random sample of 90% of the text data from that period when we perform our bootstrapping. We then ranked the most similar words based on the average cosine similarity across all 10 models.
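A sketch of this fine-tuning and averaging procedure with Gensim is shown below; it assumes a saved base model and a tokenized `period_corpus`, and the sampling and averaging details are simplified relative to the actual experiments.

```python
import random
import numpy as np
from gensim.models import Word2Vec

def finetune_once(base_model_path, period_corpus, seed):
    model = Word2Vec.load(base_model_path)            # pre-trained base model (SG or CBOW)
    rng = random.Random(seed)
    sample = rng.sample(period_corpus, int(0.9 * len(period_corpus)))
    model.build_vocab(sample, update=True)            # add period-specific vocabulary
    model.train(sample, total_examples=len(sample), epochs=model.epochs)
    return model

models = [finetune_once("base_sg.model", period_corpus, seed) for seed in range(10)]

def mean_similarity(word, other):
    # Average cosine similarity across the ten fine-tuned models.
    return np.mean([m.wv.similarity(word, other) for m in models])
```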
## 6.2 Analysis Approach
Once we have our model, we have to formalize a search method to decide which words we want to select to look at. While we are aware that we could use the embeddings themselves to find the most similar and most different words - we opt here for a *subjective* approach. The reason for this is that we know our topic of interest (NATO) and can draw on prior knowledge not included in the model.
For the core selection of words, we take those that have either a direct relation to NATO or are synonymous with it (e.g. "försvarsalliansen" (defense alliance)), have a link to states or persons involved in Sweden's application (e.g. "erdogan",
"putin", "finland"), have an association with the topics raised in the NATO discussion (e.g. "suveränitet" (sovereignty)), or words for which one subset of users in the polarization study had a markedly different use as indicated by word embeddings than another subset of users (e.g. "inkompetent" (incompetent), "dotters" (daughter's)). Besides this, we also draw on a study of words linked to polarized opinions on the issue of Sweden's entry into NATO (Bonafilia, 2023). In the end, this results in a list of 8000 words.
We then use these 8000 words and compare the averaged most similar words across the different time steps to find novel associations. While doing so, we ignore words that appeared in similar placements in all periods, such as synonyms or inflections of the word of interest. As not all the 8000 show interesting behavior, we then perform a second selection of words.
For refining the selection of words, we take all those words that fall under any one of the following criteria:
- Words which domain knowledge suggested are relevant.
- Words seen to be polarizing by Bonafilia
(2023).
- Words which markedly changed their most similar words from the pre-trained model or between periods, as determined by Rank-Biased Overlap (RBO) (Webber et al., 2010) of the sorted lists of most similar words (a minimal RBO sketch is given after this list).
- Words for which unique words appeared among the most similar words in one of the periods but not among the most similar words in any other period.
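Below is a minimal sketch of the truncated Rank-Biased Overlap computation for two ranked word lists; the persistence parameter `p` is an assumption, since the value used in the study is not stated here.

```python
def rank_biased_overlap(ranking_a, ranking_b, p=0.9):
    """Truncated Rank-Biased Overlap (Webber et al., 2010) of two ranked lists."""
    depth = min(len(ranking_a), len(ranking_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, depth + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d   # overlap of the two depth-d prefixes
        score += (p ** (d - 1)) * agreement
    return (1 - p) * score
```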
After this second selection, we perform a last, manual review to look at general trends and to drop noisy findings. We did so as we wanted to drop those words which had very different embedding only because they were too infrequent to have a meaningful embedding at all.
## 7 Results
As both architectures lead to different results, we will look at both the results of the Continuous Bagof-Words (CBOW) and the Skip-gram (SG) in turn.
For each of the two, we select four words that we deemed showed interesting patterns. These are
"natoansökan" (NATO application) and "försvaret"
(defense), as well as two unique ones for each –
"nato" (NATO) and "säkerhet" (security) for COBW
and "förskolor" (preschools) and "putin" (Putin)
for SG. For each word, we give the top four words associated with it based on their cosine similarity.
Besides these, we will also reflect on several other words that we found showed interesting behavior.
## 7.1 Continuous Bag-Of-Words
Table 2 shows the words with the highest cosine similarity for each of the four words for the CBOW
model. Also, in Figure 2, we show, for each of these four words, the comparison of the Rank-Biased Overlap between the list of the most similar words for each period and the list from the pre-trained CBOW model. Words such as "natoansökan", "nato" and "säkerhet" have a consistently low agreement in all periods, indicating a substantial shift from the base model. "Försvaret" drops to zero in the Pre-Application period as the agreement is lost completely; however, from Table 2 it is hard to determine the meaning of the shift, illustrating the difficulty of isolating the signal from noise and interpreting the results. In the pre-training data, "natoansökan" (NATO application) is so infrequently used that the word embeddings are meaningless. In the period leading up to the application, the subject of Sweden's NATO application becomes topical enough that a hashtag
application becomes topical enough that a hashtag
(\#natoansökan) starts to be used. Also, for the topic of "säkerhet" (security), we find that it becomes related to the concepts of "suveränitet" (sovereignty)
as the discussion of Sweden giving up neutrality to join a defensive alliance takes shape.
The word "nato" itself, becomes closely associated with the word "sverige" (Sweden), as both have a higher frequency (11×10−3) and (6×10−3)
when compared with the pre-trained data (1×10−5 and 8 × 10−4respectively). Leading to the word
"nato" having a more meaningful word embedding in the base model. The reason for this is that "nato",
being one of the search words, is so frequent in our data, that it has a high association with all other words. This makes the embedding relatively uninteresting to look at, as the embedding of the word is more related to other words of high frequency
- such as "sverige" (Sweden) and "vi" (we) - than with words of similar meaning. This underscores the limitation of using word embeddings to find meaningful shifts for words that are deliberately sought out to generate the dataset.
## 7.2 Skip-Gram
Table 3 shows the words with the highest cosine similarity for the Skip-gram architecture and Figure 3 shows the RBO results. Here, it can be seen that during the period after the Russian invasion of Ukraine and before the application, there is an association between "natoansökan" (NATO application)
and "destabiliserande" (destabilizing). References to destabilization appeared almost exclusively during this period. This also fits well with the political consensus at the time, i.e. that a Swedish application to NATO would destabilize the country by jeopardizing its relationship with Russia. After the press conference on April 13, this changed and an association with "eventuell" (possible) and other words relating to the (likeliness of the) process of
| Period | natoansökan | försvaret | nato | säkerhet |
|------------------|--------------------|------------------|-------------------|-------------------|
| Base | sverigesregering | försvarsmakten | försvarsalliansen | rättssäkerhet |
| | regeringsbildandet | flygvapnet | fn | trovärdighet |
| Pre-Invasion | osansökan | försvarsmakten | sverige | säkerhetspolitik |
| | emuomröstning | underhållet | ukraina | natoansökning |
| | natooption | rättsväsendet | usa | konkurrenskraft |
| | intresseanmälan | välfärdssystemet | vi | stabilitet |
| Post-Invasion | medlemskapsansökan | försvarsmakten | sverige | säkerhetspolitik |
| | natoanslutning | totalförsvaret | vi | natoansökning |
| | dispensansökan | försvarsanslaget | ukraina | suveränitet |
| | osansökan | försvarsförmågan | finland | frihet |
| Pre-Application | #natoansökan | underhållet | sverige | suveränitet |
| | natomedlamskap | försvarsförmågan | #nato | rättssäkerhet |
| | ansökningsprocess | bnp | finland | försvarskapacitet |
| | medlemskapsansökan | insatsförsvaret | vi | säkerhetspolitik |
| Post-Application | natoanslutningen | luftsförsvaret | sverige | säkerhetspolitik |
| | natoprocess(en) | totalförsvaret | finland | överlevnad |
| | natomedlemskap | välfärdssystemet | turkiet | oljeförsörjning |
| | natoansökningen | insatsförsvaret | #nato | suveränitet |
Table 2: Words with top cosine similarity in Continuous Bag-of-Words models grouped by period, for "natoansökan"
![5_image_0.png](5_image_0.png)
(NATO application), "försvaret" (defense), "nato" (NATO), and "säkerhet" (security)
application, began to appear. We can see a similar change for "försvaret" (defense) from where the association shifts from words relating to maintenance and juridical matters before the application to a connection to the spending goal of 2% of GDP
(the words "2%" and "bnp") for NATO members afterward.
Furthermore, we see a neutral word such as
"förskolor" (preschool) has a strong cosine similarity to "kärnvapen" (nuclear weapons) in the period leading up to the application. While seemingly contradictory, the reason behind this is that during this time, Left Party leader Nooshi Dadgostar made a public statement regarding not wanting NATO's nuclear weapons to be housed within Sweden, alluding to a possibility of nuclear weapon silos near her daughter's preschool. This generated conversation among Twitter users discussing the pros and cons of the NATO application, resulting in the SG model finding the similarity in the contexts in which these words appeared in. Also, we see the emergence of novel words related to Vladimir Putin. For ex-
| Period | natoansökan | försvaret | förskolor | putin |
|------------------|---------------------|---------------------|--------------------|---------------|
| Base | sverigesregering | försvarsmakten | skolor | vladimirputin |
| | ratificera | flygvapnet | äldreboenden | medvedev |
| Pre-Invasion | medlemsansökan | försvarsmakten | gymnasieskolor | ryssland |
| | byggförhandlingarna | invasionförsvaret | äldreboenden | biden |
| | omvärldsutveckling | förbandsverksamhet | fritidshem | nato |
| | drömregering | fm | vårdcentraler | xi |
| Post-Invasion | medlemsansökan | försvarsmakten | polisstationer | ryssland |
| | natomedlemskap | bnp | äldreboenden | han |
| | destabilisera(nde) | rusta | gymnasieskolor | ukraina |
| | natoanslutning | anslagen | fritidshem | nato |
| Pre-Application | natoanslutning | bnp | dotters | ryssland |
| | eventuell | 2% | dagis | putler |
| | natomedlemskap | rusta | kärnvapen | erdogan |
| | svensk | försvarskostnaderna | kärnvapenbaser | ryssen |
| Post-Application | sveriges | bnp | skolbibliotek | erdogan |
| | finlands | 2% | förskoleverksamhet | ryssland |
| | natoprocessen | försvarsanslaget | fritidshem | biden |
| | inlämnad | materielanskaffning | gymnasieskolor | putler |
Table 3: Words with top cosine similarity in Skip-gram models grouped by period, for "natoansökan" (NATO
![6_image_0.png](6_image_0.png)
application), "försvaret" (defense), "förskolor" (preschools) and "putin" (Putin)
ample, the word "putler" is meant to draw a connection between Russia's invasion of Ukraine and the aggression of Nazi Germany during the Second World War. Finally, when looking at the RBO
results, in contrast to CBOW, SG shows a larger average shift from the baseline model for all periods. This results in the approach yielding less clear results and the need for more noise words to be filtered to find useful examples, making it harder to detect a true signal. For example, even when
"förskolor" becomes a relevant word, the dip in the rank order similarity is small since the similarity was low across the board.
## 7.3 Further Examples
Other words (not shown here), also exhibit a strong relationship with certain events during the period.
Thus, the word "inkompetent" (incompetent) first had associations with words like "korrumperad"
(corrupted) and "felprioriteringar" (misplaced priorities), but later switched those to words such as "minister" (Minister), and "morganjohansson"
(Morgan Johansson) at the time of the vote of noconfidence against Minister for Justice Morgan Johansson. Besides, the word "natomotståndare"
(NATO opponent), while first being associated with the Left Party (a traditional opponent of Swedish NATO membership), became associated with the Green Party and individual Social Democrats (such as former Minister for Defense Peter Hultqvist) instead. Finally, as expected, we observe that the word "kiev" is first associated with other cities, such as Tbilisi, while Post-Invasion it gains an association with the Ukrainian "kyiv" spelling, presumably by Twitter users who wished to express solidarity with Ukraine. Finally, while the word
"azov" in the pre-training data referred to the Sea of Azov or any of a number of Ukrainian and Russian locations, the most similar words were other places in the area. Later, during the Post-Invasion period, this changed. First, the use of "azov" centered around the alleged neo-Nazi ties of the Azov Battalion, a Ukrainian militia, and then later became associated with the Siege of Mariupol, where defenders had occupied the "Azovstal" Steel Plant.
## 8 Conclusion
Our aim with this study was to look at the sudden semantic shift that we expected to occur when Sweden decided to apply for NATO membership in 2022. Looking at various words related to this application process, we find that word embeddings are a powerful tool to capture some of those shifts. Moreover, when validating them against real-world events, we find that those shifts are both accurate and meaningful. Yet, the sparsity of the dataset often makes it difficult to separate signal from noise when looking at the model results alone.
The misalignment between the signals that each of the two model architectures - SG and CBOW
- manage to capture, as well as the difficulty of validating and interpreting the results exemplifies the challenges in using word embeddings for automatically detecting and measuring semantic shifts.
Thus, there is a need for extensive human interpretation and validation based on domain knowledge together with a broad range of statistics that can reveal different aspects of the patterns captured by the models. Despite this though, word embeddings are still a powerful method that can aid the discovery process. As we showed, they are efficient enough to process large amounts of data and capture several underlying word relationships and
sudden semantic shifts.

## 9 Suggestions For Further Research
We see three suggestions for further research: two methodological and one practical. On the methodological side, we saw that selecting Tweets by their relationship to NATO resulted in a skewed frequency of NATO-related words when compared with those in the pre-trained model. Such a sparse dataset with non-representative word distributions makes the study of the search words hard. To allay this, one could extend the criteria to capture a broader and more diverse representation of the language used during the period.
Another methodological option is the consideration of a different model. Two alternatives to the model we used here are FastText (Joulin et al.,
2017) and GloVe (Pennington et al., 2014). Both offer a different perspective on word embeddings and might address some of the issues we faced here.
From the practical side, we assumed that Swedish NATO membership would be a major electoral issue and that a single year was enough to capture this debate. Both proved to be wrong.
NATO membership was rarely discussed in the period leading up to the elections, and at the time of writing, Sweden's NATO aspirations are still unfulfilled. Thus, further research could extend the data collection period to gain a better view of any shifts in the word embeddings.
## Acknowledgments
This work was supported by the Wallenberg AI,
Autonomous Systems and Software Program - Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.
## References
Peter Berglez. 2022. Hur påverkades valrörelsen 2022 av omvärlden? In Niklas Bolin, Kajsa Falasca, Marie Grusell, and Lars Nord, editors, *Snabbtänkt 2.0 22*,
page 111. Mittuniversitetet, Demicom, Sundsvall.
Steven Bird, Edward Loper, and Ewan Klein. 2009.
Natural Language Processing with Python. O'Reilly Media, Sebastopol, CA.
Niklas Bolin. 2023. The Repercussions of the Russian Invasion of Ukraine on the Populist Radical Right in Sweden. In Gilles Ivaldi and Emilia Zankina, editors, *The Impacts of the Russian Invasion of Ukraine*
on Right-Wing Populism in Europe, pages 302–313.
European Center for Populism Studies (ECPS), Brussels.
Brian Bonafilia. 2023. Methods for Detecting Echo Chambers in Social Media Networks. Master's thesis, Chalmers University of Technology.
Lars Borin, Markus Forsberg, and Lennart Lönngren.
2013. SALDO: A Touch of Yin to WordNet's Yang.
Language Resources and Evaluation, 47(4):1191–
1211.
Lars Borin, Markus Forsberg, and Johan Roxendal.
2012. Korp - the corpus infrastructure of Språkbanken. In Proceedings of the Eight International Conference on Language Resources and Evaluation
(LREC'12), Paris. European Language Resources Association (ELRA).
Douglas Brommesson, Ann-Marie Ekengren, and Anna Michalski. 2022. Sweden's Policy of Neutrality: Success Through Flexibility? In Caroline de la Porte, Guðný Björk Eydal, Jaakko Kauko, Daniel Nohrstedt, Paul 't Hart, and Bent Sofus Tranøy, editors, Successful Public Policy in the Nordic Countries:
Cases, Lessons, Challenges, pages 284–305. Oxford University Press, Oxford.
Matthew J. Denny and Arthur Spirling. 2018. Text Preprocessing For Unsupervised Learning: Why It Matters, When It Misleads, And What To Do About It. *Political Analysis*, 26(2):168–189.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, MN. Association for Computational Linguistics.
Kevin Durrheim, Maria Schuld, Martin Mafunda, and Sindisiwe Mazibuko. 2023. Using Word Embeddings to Investigate Cultural Biases. British Journal of Social Psychology, 62(1):617–629.
Charles Duxbury. 2022. Swedish Government Narrowly Survives No-Confidence Vote. Politico (EU), 06-072022.
J. R. Firth. 1957. Applications of General Linguistics.
Transactions of the Philological Society, 56(1):1–14.
Yanzhu Guo, Christos Xypolopoulos, and Michalis Vazirgiannis. 2022. How COVID-19 is Changing Our Language: Detecting Semantic Shift in Twitter Word Embeddings. In *Conférence Nationale en Intelligence Artificielle 2022 (CNIA 2022)*, Saint-Etienne, France.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In The 54th
Annual Meeting of the Association for Computational Linguistics - Proceedings of the Conference, Vol. 1
(Long Papers), pages 1489–1501, Stroudsburg, PA.
Association for Computational Linguistics.
Simon Hengchen and Nina Tahmasebi. 2021. SuperSim: A Test Set for Word Similarity and Relatedness in Swedish. In *Proceedings of the 23rd Nordic* Conference on Computational Linguistics (NoDaLiDa), pages 268–275, Linköping. Linköping University Electronic Press.
Jon Henley and Ruth Michaelson. 2022. Erdoğan: Turkey 'not positive' about Sweden and Finland joining Nato. The Guardian, 13-05-2022.
Jonas Hinnfors. 2022. Socialdemokraterna: högervridning och hot utifrån. In Niklas Bolin, Kajsa Falasca, Marie Grusell, and Lars Nord, editors, Snabbtänkt 2.0 22, page 39. Mittuniversitetet, Demicom, Sundsvall.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. GloVe: Global Vectors for Word Representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Stroudsburg, PA. Association for Computational Linguistics.
Efrén Pérez and Margit Tavits. 2023. *Voicing Politics:*
How Language Shapes Public Opinion. Princeton University Press, Princeton, NJ.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, Stroudsburg, PA. Association for Computational Linguistics.
Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta. ELRA.
Ludovic Rheault and Christopher Cochrane. 2020.
Word Embeddings for the Analysis of Ideological Placement in Parliamentary Corpora. *Political Analysis*, 28(1):112–133.
Emma Rodman. 2020. A Timely Intervention: Tracking the Changing Meanings of Political Concepts with Word Vectors. *Political Analysis*, 28(1):87–111.
Nina Tahmasebi, Gerhard Gossen, Nattiya Kanhabua, Helge Holzmann, and Thomas Risse. 2012. NEER:
An Unsupervised Method for Named Entity Evolution Recognition. In *Proceedings of COLING 2012:*
Technical Papers, pages 2553–2568, Mumbai. The COLING 2012 Organizing Committee.
Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. 2016.
Correlation-based Intrinsic Evaluation of Word Vector Representations. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 111–115, Stroudsberg, PA. Association for Computational Linguistics.
William Webber, Alistair Moffat, and Justin Zobel. 2010.
A Similarity Measure for Indefinite Rankings. ACM
Transactions on Information Systems, 28(4):1–38.
Anna Wieslander. 2022. "The Hultqvist doctrine"
- Swedish Security and Defence Policy after the Russian Annexation of Crimea. *Defence Studies*,
22(1):35–59. |
sugiura-etal-2023-building | Building a Buzzer-quiz Answering System | https://aclanthology.org/2023.acl-srw.29 | A buzzer quiz is a genre of quiz in which multiple players simultaneously listen to a quiz being read aloud and respond to it by buzzing in as soon as they can predict the answer. Because incorrect answers often result in penalties, a buzzer-quiz answering system must not only predict the answer from only part of a question but also estimate the predicted answer's accuracy. In this paper, we introduce two types of buzzer-quiz answering systems: (1) a system that directly generates an answer from part of a question by using an autoregressive language model; and (2) a system that first reconstructs the entire question by using an autoregressive language model and then determines the answer according to the reconstructed question. We then propose a method to estimate the accuracy of the answers for each system by using the internal scores of each model. | # Building A Buzzer-Quiz Answering System
Naoya Sugiura Kosuke Yamada Ryohei Sasano Koichi Takeda Katsuhiko Toyama Graduate School of Informatics, Nagoya University, Japan
{sugiura.naoya.e7,yamada.kosuke.v1}@s.mail.nagoya-u.ac.jp
{sasano,takedasu,toyama}@i.nagoya-u.ac.jp
## Abstract
A buzzer quiz is a genre of quiz in which multiple players simultaneously listen to a quiz being read aloud and respond to it by buzzing in as soon as they can predict the answer. Because incorrect answers often result in penalties, a buzzer-quiz answering system must not only predict the answer from only part of a question but also estimate the predicted answer's accuracy. In this paper, we introduce two types of buzzer-quiz answering systems: (1) a system that directly generates an answer from part of a question by using an autoregressive language model; and (2) a system that first reconstructs the entire question by using an autoregressive language model and then determines the answer according to the reconstructed question.
We then propose a method to estimate the accuracy of the answers for each system by using the internal scores of each model.
## 1 Introduction
We use the term "buzzer quiz" to refer to a genre of quiz in which questioner reads quiz questions aloud and players answer by buzzing in as soon as they can predict the answer. A well-known example of a similar format to what we call a buzzer quiz here is the U.S. TV program *Jeopady!*, in which contestants must buzz in with a lock-out device before trying to answer a question. However, in Jeopady!, answers are only allowed after all the questions have been read aloud, whereas we assume a format in which answers are allowed while the questions are being read out. Because of the importance of buzzing in quickly, players normally answer incomplete questions in buzzer quiz.
Quizzes have been studied as open-domain question answering (QA) tasks because they do not limit the scope of knowledge. However, the major datasets for open-domain QA tasks, like Natural Questions (Kwiatkowski et al., 2019) and TriviaQA
(Joshi et al., 2017) contain complete questions.
Consequently, systems built using those datasets (Karpukhin et al., 2020; Yamada et al., 2021; Izacard and Grave, 2021) are not designed to answer incomplete questions. Furthermore, it is certainly crucial in buzzer quizzes to give correct answers, but it is also essential to consider the plausibility of a predicted answer based on the given question at that moment and to decide whether to actually respond. For example, consider the question listed in Table 1 if it has not been read past the phrase "200-hit." At that point, because other baseball players also hold records comparable to that of Pete Rose, it is difficult to narrow the answer down to a single candidate. This makes the predicted answer at that moment more likely to be incorrect, so it would be better not to answer at that point. On the other hand, once the question has been read further, the predicted answer converges to the correct answer, "Ichiro Suzuki." Hence, to construct a more effective buzzer-quiz answering system, we need an indicator of a predicted answer's likelihood of being correct, which we call a "confidence score."

Q (75% completeness): Pete Rose and this player are tied with ten 200-hit seasons each. This Japanese outfielder played most of his career with the Mariners, and currently plays for the Marlins.
Confidence score: 0.991 A: Ichiro Suzuki *correct*

Q (25% completeness): Pete Rose and this player are tied with ten 200-hit seasons each. This Japanese outfielder played most of his career with the Mariners, and currently plays for the Marlins.
Confidence score: 0.125 A: Ty Cobb *incorrect*

Table 1: Examples of quiz question text and output of answering system. Gray texts indicate the unread portions of the question text. "Completeness" denotes the percentage of the question text that has been read, and the "confidence score" refers to a value indicating the likelihood of the predicted answer being correct.
We believe that the capability to respond to buzzer quizzes by answering incomplete questions could help replicate the human capacity to smoothly generate responses in a conversation by sequentially predicting the content of the dialogue.
In this study, we first constructed a buzzer-quiz answering system that produces appropriate answers for incomplete questions, and we propose methods for calculating confidence scores for two different models. Specifically, we constructed two systems: the **GPT-only** system, which directly generates answers in response to a question by using GPT (Radford et al., 2018); and the **GPT+DPR** system, which generates answers through a retriever-reader approach using Dense Passage Retrieval (DPR) (Karpukhin et al., 2020), after completing the question via GPT. For the former system, we calculate a confidence score by using token output probabilities during answer generation, while for the latter system, we use the internal scores that the model computes when producing its output.
## 2 Proposed Method
We propose two types of buzzer-quiz answering systems based on open-domain QA systems. We also propose methods to estimate the accuracy of the answers in each system by using the internal scores in each model.
## 2.1 Open-Domain QA System
In open-domain QA, there are two mainstream approaches. The first is a generation-based approach that generates answers directly in response to input questions. A representative model is GPT (Radford et al., 2018), which is a pre-trained language model that is based on the Transformer decoder
(Vaswani et al., 2017) and is trained to predict word sequences from a context by using a large text corpus. Because of this property, GPT can be used in language generation tasks that involve generating text in response to input text. In the case of QA, GPT can generate answers by formatting the input in such a way as to infer only the answer to a question. Furthermore, because GPT
often achieves higher performance through finetuning with datasets from downstream tasks, such fine-tuning can be applied to build QA models.
The second major open-domain QA approach is a retriever-reader approach that searches for documents related to a question and extracts the answer from the documents. A representative model is the retriever-reader model, which uses DPR as the retriever. DPR uses a dual encoder network with different BERT models (Devlin et al., 2019) for questions and documents. When sentences are input to BERT, a special token [CLS] is inserted at the beginning of a document, and the embedding representations for the question text and each document are obtained. Then, documents are selected according to the semantic similarity calculated as the inner product of the obtained representations
(Karpukhin et al., 2020). In the reader, BERT predicts the relevant documents containing the correct answer and extracts the answer portion within a document. Specifically, it predicts the document that is most likely to contain the answer at the position of the token [CLS]. Then, it performs the answer-portion extraction from the predicted document and determines the start and end points of the token sequence that forms the answer.
## 2.2 Buzzer-Quiz Answering Systems
The effectiveness of the open-domain QA systems that answer complete questions has been confirmed, but their effectiveness for a buzzer-quiz answering system remains unclear because such a system is required to answer incomplete questions. Generally, when only part of a question is given, the nature of the problem differs significantly from the case of a complete question, because there may be multiple possible answers, or the necessary information to determine the answer might not be available yet.
In this study, we constructed two buzzer-quiz answering systems: one that relies solely on inference via GPT, called the GPT-only system, and another that uses GPT for question completion and applies the retriever-reader approach with DPR, called the GPT+DPR system. For the GPT-only system, the designed input format is "[question text] + '/the answer is'," which prompts the model to generate the answer within the single quotation marks, which is then used as the predicted answer. The purpose of inserting a slash '/' between the question text and "the answer is" is to make the model recognize the boundary of the question text, which prevents the completion of incomplete questions. For the GPT+DPR system, an incomplete question is input to the GPT to complete the question text, and the resulting complete question is then used as input for the DPR-based retriever-reader model to generate the answer.
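As a concrete illustration of the two input formats described above, the sketch below shows how the GPT-only prompt and the GPT-based question completion for GPT+DPR could be constructed with the Hugging Face transformers API. The model identifier and the English rendering of the prompt are placeholders (the paper's prompt is in Japanese and the exact model is given only in a footnote), so this is a hedged sketch rather than the authors' implementation.

```python
# A minimal sketch, assuming a Hugging Face causal LM; the model name and the
# English rendering of the prompt are placeholders, not the authors' exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-japanese-gpt-model"  # placeholder for the footnoted model ID
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def gpt_only_prompt(partial_question: str) -> str:
    # "[question text] + '/the answer is'": the slash marks the question boundary
    # so that the model does not try to complete the truncated question.
    return partial_question + "/the answer is"

def complete_question(partial_question: str, max_new_tokens: int = 60) -> str:
    # For GPT+DPR, GPT first completes the (possibly truncated) question; the
    # completed question is then passed to the DPR retriever-reader.
    inputs = tokenizer(partial_question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```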
## 2.3 Confidence Scores
Next, we propose to calculate the confidence scores for predicted answers by using the internal scores that each model uses when it generates the outputs for the buzzer-quiz answering system. Here, the confidence score means an indicator for judging whether a predicted answer is correct. For higher values of our proposed confidence scores, we expect a higher percentage of correct answers.
For the GPT-only model, we use the generation probability of the first token in the predicted answer (referred to as the **generation score**) as the confidence score. When given a sentence's first n tokens during sentence completion, GPT outputs the (n + 1)-th token from the vocabulary with the highest generation score. The first token largely determines the direction of the answer in the buzzer quiz, because the answer often comprises a small number of tokens. Hence, we adopt only the first token's generation score as the confidence score.
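A sketch of how this generation score could be computed from the model's output distribution is given below; it relies on the `output_scores` option of `generate` in the transformers library and illustrates the idea above rather than reproducing the authors' code.

```python
# A sketch of the generation score: the probability the model assigns to the
# first token of the predicted answer, used as the GPT-only confidence score.
import torch

def answer_with_generation_score(model, tokenizer, prompt: str):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=10,
            do_sample=False,                 # greedy decoding
            output_scores=True,
            return_dict_in_generate=True,
        )
    answer_ids = out.sequences[0, inputs["input_ids"].shape[1]:]
    # out.scores[0] holds the logits of the first generated token.
    first_token_probs = torch.softmax(out.scores[0][0], dim=-1)
    confidence = first_token_probs[answer_ids[0]].item()
    answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
    return answer, confidence
```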
As for the GPT+DPR model, three internal scores can be used as confidence scores: the **document score** and the **extraction score** calculated by the reader, as well as their arithmetic mean, the average score. In the reader, each [CLS] token in a document is scored through a learned linear layer, and the document with the highest score is selected; this is the document score. Then, the model extracts the span containing the answer from the selected document by calculating a span score, which comprises a start score and an end score. The extraction score is the sum of these start and end scores.
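The reader-side scores can be pictured schematically as follows; the tensors and linear heads below are stand-ins for the DPR reader's internals and are only meant to make the score definitions concrete, not to reproduce the DPR implementation.

```python
# A schematic sketch (not the DPR implementation) of the reader-side scores.
import torch
import torch.nn as nn

hidden_size, num_docs, seq_len = 768, 20, 256
cls_hidden = torch.randn(num_docs, hidden_size)             # [CLS] vector of each candidate document
token_hidden = torch.randn(num_docs, seq_len, hidden_size)  # token vectors of each document

relevance_head = nn.Linear(hidden_size, 1)  # learned layer scoring each [CLS]
span_head = nn.Linear(hidden_size, 2)       # start/end logits per token

doc_scores = relevance_head(cls_hidden).squeeze(-1)
best_doc = doc_scores.argmax().item()
document_score = doc_scores[best_doc].item()

start_logits, end_logits = span_head(token_hidden[best_doc]).unbind(-1)
start_idx = start_logits.argmax().item()
end_idx = end_logits[start_idx:].argmax().item() + start_idx  # end must not precede start
extraction_score = (start_logits[start_idx] + end_logits[end_idx]).item()

average_score = (document_score + extraction_score) / 2       # the "average score"
```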
## 3 Experiments
We conducted two experiments: an evaluation of the proposed buzzer-quiz answering system's accuracy, and an investigation of the effectiveness of the confidence scores for each model. We define question completeness as x% when a question is truncated after the first x% of the text in terms of the character count. For the accuracy verification, we applied the GPT-only and the GPT+DPR models to questions with completeness levels of 25%,
50%, 75%, and 100%. For investigation of the confidence scores' effectiveness, we evaluated the confidence scores for each model by examining the relationship between the confidence scores and the accuracy at each level of question completeness.
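The completeness definition above amounts to keeping a character-level prefix of the question; a small helper sketching this (our own illustration, not the authors' preprocessing code) is:

```python
# A small helper sketching the completeness definition: keep the first x% of
# the question text, measured in characters.
def truncate_question(question: str, completeness: float) -> str:
    """Return the prefix covering `completeness` (e.g. 0.25) of the characters."""
    n_chars = max(1, int(len(question) * completeness))
    return question[:n_chars]

# e.g. truncate_question(question_text, 0.25) gives the 25%-completeness input.
```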
## 3.1 Settings
![2_image_0.png](2_image_0.png)

Datasets We used the 2nd AIO Official Dataset (AIO),1 which contains past questions from Japanese quiz competitions. The AIO dataset is officially divided into a training set, a development set, and a test set. In addition, we collected past questions from the Japanese quiz application "Minna de Hayaoshi Quiz" (Minhaya)2 as additional training data. Table 2 shows the number of quiz-answer pairs and the average number of characters in the questions for the datasets. Note that the training of DPR required positive and negative documents in addition to quiz-answer pairs. Accordingly, DPR was trained using only the AIO dataset, whereas the Minhaya dataset was used only for training GPT.

1https://sites.google.com/view/project-aio/dataset
Comparison Models We compared both models, GPT-only and GPT+DPR, in the accuracy verification. In the investigation of confidence score effectiveness, for GPT-only, we used the generation score; in contrast, for GPT+DPR, we used all three scores, i.e., the document score, extraction score, and average score.
We used the Japanese GPT model3 on Hugging Face Hub (Wolf et al., 2020) and a DPR
model4 based on Japanese BERT-large,5 which is pre-trained on the Japanese Wikipedia corpus. For GPT-only, we fine-tuned the model on the training set with the input format "[question text] + '/ the answer is' [answer]." For GPT+DPR, GPT was fine-tuned using only the questions from the training set. In both cases, the training was conducted for 5 epochs. DPR was based on Japanese BERT-large for both the retriever and reader components.
The retriever was trained for 5 epochs with a batch size of 128 and a learning rate of 1e-5, and the reader was trained for 3 epochs with a batch size of 8 and a learning rate of 2e-5.
![3_image_0.png](3_image_0.png)

Metrics In the accuracy verification, the correctness of the predicted answer was assessed in terms of exact matching. In the investigation of confidence score effectiveness, we created curves of the correct answer rate with respect to the answer generation rate, and we evaluated the effectiveness in terms of the area under the curve (AUC). Here, the answer generation rate was the proportion of times that the system actually provided an answer. If the models only answer questions for which the confidence score exceeds a threshold α, we can control the answer rate by changing α. On the other hand, the correct answer rate was the proportion of correct answers among the answers output by the models. If α is set low enough that every question is answered, the correct answer rate coincides with the system's overall accuracy. As α increases, only questions with high confidence scores will be answered, so the correct answer rate is expected to increase.
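The sketch below illustrates this evaluation: sweeping the threshold α over the observed confidence scores, recording the answer rate and correct answer rate, and integrating the resulting curve. Variable names are our own; this is not the authors' evaluation script.

```python
# A sketch of the answer-rate vs. correct-answer-rate curve and its AUC.
import numpy as np

def answer_rate_curve(confidences, is_correct):
    confidences = np.asarray(confidences, dtype=float)
    is_correct = np.asarray(is_correct, dtype=float)
    answer_rates, correct_rates = [], []
    for alpha in np.sort(confidences)[::-1]:        # strictest threshold first
        answered = confidences >= alpha
        answer_rates.append(answered.mean())         # answer (generation) rate
        correct_rates.append(is_correct[answered].mean())
    return np.array(answer_rates), np.array(correct_rates)

def curve_auc(answer_rates, correct_rates):
    order = np.argsort(answer_rates)
    return np.trapz(correct_rates[order], answer_rates[order])
```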
## 3.2 Accuracy Verification
Table 3 lists the accuracies for the GPT-only and GPT+DPR models for each level of question completeness. As the question completeness decreased, the correct answer rate also decreased, but the rate of decrease was not proportional. From 100% to 75%, the decline was relatively gentle. This was likely because many important words that determine the answer appear in the first half of a question, whereas cases with information-rich words appearing in the latter half of a question are relatively rare. Comparing the scores of the two models, we see that GPT+DPR performed better when the question completeness was 100%. When the questions were incomplete, however, no significant difference in performance between the two models was observed.
## 3.3 Confidence Score Effectiveness
Table 4 lists the AUC values for each level of question completeness. Among the three confidence scores for GPT+DPR, using the document score yielded the highest AUC. Furthermore, among all the results, the generation score for GPT-only achieved the highest AUC.
![3_image_1.png](3_image_1.png)

Table 4: AUC values for each level of question completeness. "Score" means the internal scores we used.

![3_image_2.png](3_image_2.png)

![3_image_3.png](3_image_3.png)

Next, because the document score had the highest AUC for GPT+DPR, we used it to compare the correct answer rate vs. answer generation rate curves of the GPT-only and GPT+DPR models. Figure 1 shows the results. For all settings, we can observe that the accuracy was increased by limiting the questions to be answered to only those with high confidence scores, thus confirming the effectiveness of the confidence scores. Comparing GPT-only and GPT+DPR, as listed in Table 3, the accuracy at an answer rate of 1.0 was higher for GPT+DPR when the question completeness was 100%, and equivalent for less-complete questions. When the answer rate was less than 0.8, however, GPT-only had higher accuracy in all cases.
This difference was more obvious when both the question completeness and the answer rate were low. For example, in the case of 25% question completeness and an answer rate of 0.1, the accuracy of GPT+DPR is around 0.5, whereas that of GPT-only was around 0.8, thus showing a significant difference. Accordingly, we can conclude that the GPT-only model is more suitable for buzzer quizzes.
Table 5 shows examples of quiz question text and output from the GPT-only system. Examples (a) and (b) are cases with 25% question completeness, while Examples (c) and (d) are cases with 75% question completeness. In Examples (a) and (c), the system predicted correct answers with high confidence scores because sufficient information was provided to narrow down the answer. In contrast, in Examples (b) and (d), the system predicts the answers with high confidence scores, but the answers are incorrect. Example (b) is a question text with contrasting first and second halves, which would be difficult to answer in a situation where only the first half of the question is given. Example (d) is incorrect because the question text is mostly clear, but does not contain the key information that determines one answer.

(a) Q (25% completeness): ごはんの上にハンバーグと目玉焼きを乗せ、グレービーソースをかけたハワイの名物料理は何でしょう? (This is a rice dish topped with a hamburger steak and a fried egg, which is covered with gravy sauce and originated in Hawaii. What is this?)
Confidence score: 0.996 A: ロコモコ (loco moco) *correct*

(b) Q (25% completeness): オーストリアの首都はウィーンですが、オーストラリアの首都はどこでしょう? (The capital of Austria is Vienna, but what is the capital of Australia?)
Confidence score: 0.982 A: キャンベラ (Canberra) *incorrect*

(c) Q (75% completeness): 約5年の歳月をかけてシスティーナ礼拝堂の祭壇に描かれた、ミケランジェロの代表作である絵画は何でしょう? (This painting was created over the span of about five years in the Sistine Chapel. Now, this is known as one of Michelangelo's masterpieces. What is this?)
Confidence score: 0.991 A: 最後の審判 (The Last Judgment) *correct*

(d) Q (75% completeness): 1985年に発売され、全世界で4000万本以上を売り上げたという任天堂ファミリーコンピュータのゲームで、「スーマリ」などと略されるものは何? (This game was launched for the Nintendo Family Computer in 1985 and has sold 40 million copies, which is often referred to by the abbreviation "Su-Mari." What is this?)
Confidence score: 0.955 A: ドンキーコング (Donkey Kong) *incorrect*

Table 5: Examples of quiz question text and output from the GPT-only system.
## 4 Conclusion
In this study, we constructed two models for answering buzzer quiz questions, which have not been considered in previous research: GPT-only and GPT+DPR. Then, we evaluated the accuracy for various levels of question completeness. Furthermore, we investigated the relationship between the model's internal scores, which were treated as confidence scores, and the accuracy; as a result, the validity of using the internal scores of the models as confidence scores was confirmed.
In the future, we plan to consider the use of more powerful models like FiD (Izacard and Grave, 2021) or GPT-4 (OpenAI, 2023) to improve the correct answer rate for quizzes. We would also like to investigate the differences in performance between our systems and humans.
## Limitations
We built buzzer quiz answering systems. However, they do not take into account the time required to respond, and these systems do not have the ability to generate real-time responses, which is essential in actual buzzer quizzes. Additionally, the experiments in this study were conducted only in Japanese, and it remains unclear whether similar results would be obtained in other languages.
In particular, English has a significantly different sentence structure from Japanese, so further investigation is necessary to confirm whether appropriate results can be achieved.
## Acknowledgements
This work was partly supported by JSPS KAKENHI Grant Number 21H04901.
## References
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171–4186.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th*
Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
(EACL 2021), pages 874–880.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL
2017).
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 6769–
6781.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics (TACL)*, 7.
OpenAI. 2023. GPT-4 technical report.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Open AI Technical Report.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30 (NIPS 2017)*, pages 5998–6008.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2020), pages 38–45.
Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi.
2021. Efficient passage retrieval with hashing for open-domain question answering. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers) (ACL-IJCNLP 2021), pages 979–986. |
schneidermann-etal-2023-probing | Probing for Hyperbole in Pre-Trained Language Models | https://aclanthology.org/2023.acl-srw.30 | Hyperbole is a common figure of speech, which is under-explored in NLP research. In this study, we conduct edge and minimal description length (MDL) probing experiments on three pre-trained language models (PLMs) in an attempt to explore the extent to which hyperbolic information is encoded in these models. We use both word-in-context and sentence-level representations as model inputs as a basis for comparison. We also annotate 63 hyperbole sentences from the HYPO dataset according to an operational taxonomy to conduct an error analysis to explore the encoding of different hyperbole categories. Our results show that hyperbole is to a limited extent encoded in PLMs, and mostly in the final layers. They also indicate that hyperbolic information may be better encoded by the sentence-level representations, which, due to the pragmatic nature of hyperbole, may therefore provide a more accurate and informative representation in PLMs. Finally, the inter-annotator agreement for our annotations, a Cohen's Kappa of 0.339, suggests that the taxonomy categories may not be intuitive and need revision or simplification. | # Probing For Hyperbole In Pre-Trained Language Models
Nina Skovgaard Schneidermann1, Daniel Hershcovich2 and Bolette Sandford Pedersen1 1Center for Language Technology, 2Department of Computer Science University of Copenhagen [email protected], [email protected], [email protected]
## Abstract
Hyperbole is a common figure of speech, which is under-explored in NLP research. In this study, we conduct edge and minimal description length (MDL) probing experiments for three pre-trained language models (PLMs) in an attempt to explore the extent to which hyperbolic information is encoded in these models. We use both word-in-context and sentencelevel representations as model inputs as a basis for comparison. We also annotate 63 hyperbole sentences from the HYPO dataset according to an operational taxonomy to conduct an error analysis to explore the encoding of different hyperbole categories. Our results show that hyperbole is to a limited extent encoded in PLMs, and mostly in the final layers. They also indicate that hyperbolic information may be better encoded by the sentence-level representations, which, due to the pragmatic nature of hyperbole, may therefore provide a more accurate and informative representation in PLMs.
Finally, the inter-annotator agreement for our annotations, a Cohen's Kappa of 0.339, suggests that the taxonomy categories may not be intuitive and need revision or simplification.
## 1 Introduction

Hyperbole is a common figure of speech that involves the use of exaggerated language for emphasis or effect (Claridge, 2010). Humans exaggerate in a variety of registers and contexts, spanning from the colouring of informal, everyday speech to a literary trope or a rhetorical means of persuasion. Hyperboles intentionally augment or diminish a feature of some referent of discourse, presenting this feature on some more or less abstract scale of magnitude. The task of hyperbole identification poses a challenge to natural language processing in that it is highly pragmatic and utilizes context and background knowledge to distinguish between literal and exaggerated usage of a given lexical unit. As an illustration of the pragmatic nature of hyperbole, we can inspect the following two example sentences, wherein
(1A) is hyperbolic and (1B) is literal:
(1A) I've seen this movie *at least eighty thousand times*.
(1B) These products are tested *at least eighty thousand times*.
In (1A), it is reasonable to assume that the speaker is exaggerating the number of times they have seen this particular movie to emphasize their enjoyment or familiarity with it because this would otherwise be a significant and unrealistic time investment. However, when it comes to a particular product, it has likely gone through rigorous testing and quality control measures, which means that the statement in (1B) can reasonably be interpreted literally.
Hyperbole identification has recently attracted the interest of NLP researchers who have collected datasets manually or semiautomatically and shown that computational modelling of hyperbole is indeed plausible
(Troiano et al., 2018). However, it remains an under-explored area of research in figurative language processing (FLP), primarily because its subjective and contextual nature complicates computational modelling of the phenomenon and makes it challenging to apply a standard for collecting high-quality annotated data (Biddle et al., 2021).
This paper seeks to contribute to the growing research on hyperbole identification in two ways: Firstly, we perform probing tasks to investigate whether pre-trained language models (PLMs) encode hyperbolic information in their representations without fine-tuning on task-specific data.1

1By "hyperbolic", we consistently refer to the figure of speech, not the mathematical space.

In recent years, probing tasks
have emerged as a popular approach in NLP
for interpreting and analyzing model representations, and it has previously been shown that PLMs do encode both simile and metaphorical knowledge (Chen et al., 2022). However, to our knowledge, hyperbole probing remains so far unexplored. Therefore, we replicate edge and minimal description length (MDL) probing experiments for metaphor described by Aghazadeh et al. (2022) on a small hyperbole dataset constructed by Troiano et al.
(2018). We expect that encoding hyperbole may present a larger challenge to PLMs than metaphor because hyperbole knowledge is primarily pragmatic rather than semantic (McCarthy and Carter, 2004).
Secondly, we build an operational taxonomy based on a meta-analysis of the linguistic treatment of hyperbole, and annotate an existing dataset according to said taxonomy (McCarthy and Carter, 2004; Mora, 2009; Claridge, 2010; Burgers et al., 2016; Troiano et al., 2018). We then use these annotations to analyze errors in model predictions to further shed light on the types of hyperboles that may pose a particular challenge to PLMs, as well as when constructing training corpora for the phenomenon. Our work will hopefully provide insight into the challenges of PLMs in identifying hyperbole, as well as contribute to developing an operational annotation standard for computational modelling of hyperbole.2 The remainder of this paper is structured as follows: Section 2 contains an overview of related work in hyperbole research, as well as probing experiments on other figures of speech.
Section 3 provides a background on the linguistic research that is the framework for our operational taxonomy and annotation. Section 4 is a short explanation of probing tasks for PLMs, which we relate to the aim of our experiments. Section 5 outlines our experimental setup and describes the modifications made to the HYPO dataset. Section 6 provides our results and preliminary error analysis, and section 7 is a discussion of said results, as well as ideas for future research. Section 8 contains a summary and conclusions.
## 2 Related Work
In this section, we outline previous research related to both hyperbole and probing experiments on other figures of speech.
Hyperbole in NLP. While tropes such as metaphor and sarcasm have received considerable attention within figurative language processing research (Abulaish et al., 2020; Rai and Chakraverty, 2020; Moores and Mago, 2022),
the automatic modelling of hyperbole is still at a relatively early stage. Research within this area can be roughly split into two objectives, hyperbole identification (HI) and hyperbole generation (HG).
Within the first, and for our purposes most interesting, category, Troiano et al. (2018) introduce the task of hyperbole detection by showing that classical machine learning pipelines can identify hyperboles with beyond-chance accuracy. For this purpose, they collect HYPO, the only manually constructed corpus of 709 English hyperboles, and include with each hyperbolic sentence two contrasting corpora: One consisting of the manually constructed literal paraphrases to each of the sentences, and another consisting of a contrastive non-hyperbolic example using the same minimal lexical unit. They then identify a set of hand-crafted features targeting qualitative and quantitative aspects of exaggeration and report the best-performing classifier to be logistic regression using the literal paraphrases as negative examples, which achieves a 76% F1 score. In the same realm, Kong et al. (2020) address hyperbole detection using deep learning techniques on a constructed Chinese corpus and find that an LSTM with hand-crafted and embedding features produced superior results
(85.4% accuracy). Biddle et al. (2021) construct a multitask learning classification architecture for hyperbole detection using a multitask BERT-based approach, wherein the model is fine-tuned on the HYPO dataset and takes the literal paraphrases as privileged information using triplet sampling. The authors find that their model improves the logistic regression baseline described by Troiano et al. (2018)
by 10%. The authors also devise a series of test sentences to linguistically probe their model for extreme case formulations (ECFs), quantitative, and qualitative hyperboles, as described by Mora (2009), and find that their model particularly excels at hyperboles containing ECFs, which may be due to the lexical substitution between the hyperbole and the literal paraphrase being minimal.
Recent frameworks have also leveraged pretrained language models to generate hyperbole and expand on existing hyperbole data in a semi-supervised way. Specifically, Tian et al. (2021) construct a sentence-level hyperbole generation model by fine-tuning it on sentences from a Reddit corpus using the syntactic pattern known as the "so ... that" pattern, which is said to be a productive strategy for hyperbole (McCarthy and Carter, 2004). The authors annotate the data with semantic relationships within the sentence and feed the annotations to COMeT models (Bosselut et al.,
2019) trained to generate commonsense and counterfactual inference. They then train a classifier to rank hyperbole candidates and use a paraphrase model to generalize to more syntactic patterns. An HG approach by Zhang and Wan (2021) involves constructing a large-scale hyperbole corpus, HypoXL, and proposes an unsupervised approach to hyperbole generation wherein a fine-tuned BART model is used to fill in masked hyperbolic spans.
While these efforts point towards the possibility of successfully training computational models for the task of identifying hyperbole, the research so far also has significant gaps: Firstly, hyperbole in NLP lacks a unifying definition or linguistically motivated formal theory to describe the phenomenon. This is reflected in a lack of a consistent annotation scheme and procedure for hyperbole identification in the available data, which leaves hyperbole studies relatively far behind investigations of metaphor, where most annotated data use either the Metaphor Identification Procedure and its extensions (MIP/MIPVU; Group, 2007; Steen et al., 2019), or Conceptual Metaphor Theory (CMT; Lakoff and Johnson, 1980) as a procedure for annotation. This consistency of theoretical framework and annotation procedure makes it easier to perform experiments generalizing across languages and datasets.
Secondly, limited attempts have been made to probe pre-trained language models on how well they encode hyperbole without any finetuning. This makes it unclear whether models simply reconstruct the hyperboles found in the fine-tuning objective, and how well the model is able to learn hyperbolic information in a zero-shot or few-shot setting.
Our experiment is, to our knowledge, the first one to not utilize a fine-tuned model on hyperbolic sentences and to instead use probing methods to test for the encoding of hyperbolic information in PLMs.
Probing PLMs for figurative language information. Probing techniques provide ways to understand and interpret the internal representations learned by deep neural networks (Belinkov, 2022). They typically involve extracting particular features or representations from a model's intermediate layers to gain insights into its structure or decision-making process. Several recent experiments have been designed to probe PLMs for information on figurative language. Namely, Chen et al. (2022) tackle similarity interpretation (SI)
and generation (SG) tasks by probing simile knowledge from PLMs by testing it on similarity triple completion, i.e. sentences that take the form *[NP1] is as [ADJ] as [NP2]*. Their approach is to manually construct masked sentences with this syntactic pattern and predict the candidate words in the masked position.
To that end, they adopt an auxiliary training process with the MLM loss to enhance the prediction diversity of candidate words. While this kind of probing works well to generate particular syntactic constructions, it would be ineffective for hyperbole due to its relatively limited dependence on syntax.
![3_image_0.png](3_image_0.png)

Instead, we choose to adapt several experiments conducted for metaphor probing by Aghazadeh et al. (2022) for hyperbole. The
authors conduct probing in two ways: First, they train a linear probing classifier on 3 different PLMs to evaluate the accuracies and extractabilities with which they encode metaphorical knowledge. Secondly, they use MDL probing to analyze the depth of the encoding of metaphorical information in multi-layer representations. The authors further extend their experiment by generalizing across four datasets and four languages. The results suggest that contextual representations in PLMs do encode metaphorical knowledge, mostly in their middle layers, and that it is possible to transfer this information across languages and datasets provided the annotation is consistent across training and testing sets.
While we can replicate the basic probing experiments, we cannot test the model's generalizability given the scarce hyperbole data.
However, we do expect that it is possible via these techniques to learn something about the internal representations of hyperbole.
## 3 A Taxonomy For Hyperbole

![4_image_0.png](4_image_0.png)

In simple terms, hyperbole involves exaggerating a feature's property X beyond what is justified by the literal state of affairs (Claridge, 2010; Troiano et al., 2018). Stated in a more discourse-centred way, hyperbole occurs when an expression is more extreme than justified given the ontological referent, i.e. the entity in the world referenced by the text (Burgers et al.,
2016). While much of the work on hyperbole has previously been subsumed under studies of metaphor, humour, and verbal irony, recent corpus linguistic analyses have shed light on more fine-grained characteristics. Namely, the consensus in the treatment of hyperbole in literature is that the phenomenon is, among others, characterized by the presence of extreme case formulations (ECF), the ability of hyperbole to create either extreme possible worlds or downright counterfactual and absurd scenarios, and its augmentation of some property along a qualitative or quantitative scale (McCarthy and Carter, 2004; Mora, 2009; Claridge, 2010).
In the following, we outline some of the key characteristics and visualize them in an operational taxonomy (see Figures 1 and 2).
Dimension. There is widespread agreement that hyperbole occurs on a scale of magnitude along two main dimensions: a quantitative scale and a qualitative scale (Mora, 2009; Claridge, 2010; Troiano et al., 2018). The distinction between these scales refers to whether a hyperbole primarily concerns objective and measurable aspects or subjective and evaluative emotional states of affairs. According to Mora (2009), who conducted a corpus analysis of natural conversation on a 52000 word subset of the British National Corpus (BNC),
quantitative hyperboles comprise 61% of the analyzed hyperboles and include the semantic fields of completeness, universality, measure, and magnitude. Qualitative (evaluative)
hyperboles concern positive or negative sentiments, as well as impact or singularity; e.g.
'shocking', 'smashing' etc. However, an important point to make here is that there is a significant overlap between these dimensions, as hyperboles will generally have an evaluative function: For instance, the expression that somebody has "piles of batteries in their room" could be said to be a negative evaluation of the state of the room, but we choose to annotate such expressions as primarily quantitative, as the exaggerated property is one of measure.
Another potentially relevant distinction is that quantitative hyperboles have a verifiable element, whereas purely qualitative hyperboles often serve to convey an internal subjective mental or emotional state (Claridge, 2010):
For instance, in the statement, *It was the worst meal I have ever had*, the speaker could either be conveying their honest opinion of the meal, or they could be using exaggeration as a figure of speech to emphasize their disappointment with the meal.
Type. We use the term "type" to refer to whether the hyperbole is basic or composite, i.e., whether it stands alone or is combined with another figure of speech. According to Claridge (2010), hyperboles are basic if they preserve the semantic domain of the corresponding literal paraphrase, and composite if it involves a domain transfer where elements of a source domain is mapped onto a target domain. The latter is primarily the case with metaphor and, to a lesser extent, metonymy
<citeclaridge2010hyperbole. In our annotations, we analyze simile as domain-preserving, even though we recognize that simile can be analyzed as an explicit metaphor (Burgers et al.,
2018).
Degree of possibility. This distinction is one of degree and refers to the extent to which hyperboles generate impossible, absurd, or counterfactual scenarios. This is purely pragmatic and influences the degree to which a statement may be perceived as hyperbolic (McCarthy and Carter, 2004; Troiano et al., 2018).
Level of conventionality. This last dichotomy refers to the fact that hyperboles can use either more conventional or more novel and creative language to express exaggeration.
This also impacts the extent to which a statement is perceived as a hyperbole: For instance, to say that one has not seen a person *for ages* is so frequent that it could be considered a latent or dead hyperbole, in the sense that it might not be viewed as intentional exaggeration for a specific purpose (McCarthy and Carter, 2004). However, in our annotation, we do label such frequent sentences as hyperbolic, albeit conventionalized ones.
## 4 Probing PLMs For Hyperbole
Probing language models aims to answer questions related to the model's internal representation, such as the location and depth of the encoding of a linguistic property in the multi-layer representation, or which input features contributed to a particular behaviour of the PLM (Belinkov, 2022). Standard probing methods involve training a linear classifier on top of a PLM to predict a linguistic property of interest, where a high probing performance on the task is associated with the model encoding said property. It is common practice to freeze the parameters of the PLM, which serves to prevent the gradients of the probing classifier from back-propagating into the model and thereby altering its pre-trained representation
(Tenney et al., 2019). Following Aghazadeh et al. (2022), our experiments are not aimed at improving the accuracy of hyperbole identification tasks; we simply want to check the extent to which hyperbole knowledge may be encoded in the base representations. To that end, we employ edge probing, in which the classifier receives span-level representations from the PLM as inputs after they have been projected to a fixed-dimensional layer, 250 in this case. Thus, we define the span input to the PLM as the minimal lexical unit conveying hyperbolic information as given by the HYPO
dataset (Troiano et al., 2018).
One common criticism of edge probing is that it may not be explanatory in the sense that it does not provide insight into whether a model is learning a linguistic property or simply memorizing the task (Belinkov, 2022).
An information-theoretic perspective on addressing this limitation is to combine the probing quality of the classifier with some metric of the effort needed to extract the linguistic knowledge. This approach is known as MDL
probing (Voita and Titov, 2020), wherein effort intuitively refers to the number of steps required by the PLM to encode a compressed representation of the input sequence. Following Aghazadeh et al. (2022), we use the online coding implementation of MDL, which measures a representation's ability to learn from various portions of the data. We report the compression, i.e., the ratio of the uniform codelength, given by N · log2(K), to the online codelength. Here, N refers to the size of the dataset and K is the number of unique labels being predicted. A random classifier will have a compression of 1, and increased data compression is associated with a better encoding of the given property.
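A sketch of the online-coding computation, under the standard formulation of Voita and Titov (2020), is given below; the block losses are assumed to come from probes trained on increasing fractions of the data, and this is not the probing code used here.

```python
# A sketch of online-coding compression, assuming the standard MDL formulation:
# block_losses_nats[i] is the summed cross-entropy (in nats) of a probe trained
# on blocks 0..i-1 and evaluated on block i.
import math

def online_compression(block_losses_nats, first_block_size, n_examples, n_classes):
    online_codelength = first_block_size * math.log2(n_classes)   # first block: uniform code
    online_codelength += sum(loss / math.log(2) for loss in block_losses_nats)  # nats -> bits
    uniform_codelength = n_examples * math.log2(n_classes)
    return uniform_codelength / online_codelength                 # 1.0 for a random classifier
```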
## 5 Experiments

Here we describe our data and setup.
Dataset and annotation. We utilize HYPO,
a manually constructed English hyperbole dataset (Troiano et al., 2018) of 709 hyperboles with corresponding literal paraphrases, as well as a *minimal units corpus* that provides the contrastive negative (literal) examples for each hyperbole (see examples (1A) and (1B)
in §1).
For the purpose of our experiment, we first discard the corpus of literal paraphrases as we are interested in contrasting the hyperbolic usage of a particular word or phrase with a literal usage of the same word or phrase. It would otherwise not be possible to construct spans.
![6_image_0.png](6_image_0.png)

To obtain span labels for each hyperbole and its negative contrast sentence, we programmatically extract the positions of each minimal lexical unit and manually adapt the labels as needed; namely, we exclude examples with multiple spans and those without minimal unit contrasts.3 Our final dataset contains 1396 span-labelled hyperbolic and literal sentences, which we split into training (70%), test (20%),
and development (10%) sets.
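The span extraction step can be sketched as below; the field names are our own assumptions about the HYPO-style rows, and failed string matches correspond exactly to the cases that required manual adaptation.

```python
# A sketch of extracting character-level span labels for the minimal lexical
# unit; field names are illustrative assumptions about the HYPO-style rows.
def extract_span(sentence: str, minimal_unit: str):
    """Return (start, end) character offsets of the minimal unit, or None."""
    start = sentence.lower().find(minimal_unit.lower())
    if start == -1:
        return None                      # flagged for manual adaptation
    return start, start + len(minimal_unit)

example = {"sentence": "I've seen this movie at least eighty thousand times.",
           "minimal_unit": "at least eighty thousand times"}
span = extract_span(example["sentence"], example["minimal_unit"])  # (21, 51)
```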
We meticulously annotate the 63 hyperbolic sentences in the development sample using the operative taxonomy outlined in §3.4 In order to obtain inter-annotator agreement, we enlisted the help of an additional 5 annotators, assigning 12-13 sentences to each. As a result, each sentence is annotated twice. We observe a mean Cohen's Kappa of 0.339 (see Figure 3), suggesting only fair agreement, with particular difficulties on the dimension and type spectra of the taxonomy.
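One possible way to compute such a mean Kappa (here per taxonomy dimension, over annotator pairs) is sketched below with placeholder label arrays; the paper's exact aggregation is reported in Figure 3.

```python
# A minimal sketch of the agreement computation with placeholder label arrays;
# the real annotations cover 63 sentences and the full taxonomy.
from sklearn.metrics import cohen_kappa_score
import numpy as np

annotator_a = {"dimension": ["QUAL", "QUANT", "QUAL"],
               "type":      ["PDOM", "SDOM", "PDOM"]}
annotator_b = {"dimension": ["QUAL", "QUAL", "QUAL"],
               "type":      ["PDOM", "SDOM", "SDOM"]}

kappas = [cohen_kappa_score(annotator_a[dim], annotator_b[dim])
          for dim in annotator_a]
mean_kappa = float(np.mean(kappas))
```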
Experimental setup. We conduct edge- and MDL probing experiments for three models, BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and Electra (Clark et al., 2020).
Following Aghazadeh et al. (2022), all the models are initialized from the base versions of the Huggingface Transformers library (Wolf et al., 2020), with 12 layers, 768 hidden size, and 110M parameters. In line with the procedure described in detail by Tenney et al.
(2019), we use the contextual vector representation for each span as input to the model, followed by a projection layer and self-attention pooling to collapse the span vectors down to a fixed-length 256-dimensional representation. The edge probing classifier, which in this case is a single linear layer, is then trained on top of the PLM. We do not change the original hyperparameters; we keep the batch size of 32 and the learning rate of 5e-5, and train over 5 epochs for each experiment. During model training, the development set is used to monitor the model's performance and as a stopping criterion at each epoch. The MDL probe is based on the same structure as the edge probing experiment (Aghazadeh et al., 2022). One minor change we make to accommodate the small size of our data is to delete the smallest fraction trained on by the MDL probe, as it would otherwise amount to a single example.

| Experiment | Word-in-Context | | Sentence Level | |
|------------|----------|------|----------|------|
| | Accuracy | µ-F1 | Accuracy | µ-F1 |
| BERT | 0.69 | 0.6895 | 0.72 | 0.7184 |
| RoBERTa | 0.72 | 0.7220 | 0.78 | 0.7762 |
| ELECTRA | 0.73 | 0.7256 | 0.78 | 0.7761 |

Table 1: Edge probing classification accuracy and micro-F1 for the word-in-context and sentence-level configurations.
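The probing head described here can be sketched as follows; this is an illustration of a span-pooling classifier on top of a frozen PLM, not the exact implementation of Tenney et al. (2019) or Aghazadeh et al. (2022).

```python
# A schematic PyTorch sketch of the edge probing head: projection of span token
# vectors, self-attention pooling to a fixed 256-dimensional vector, and a
# single linear classification layer.
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    def __init__(self, hidden_size=768, proj_dim=256, num_labels=2):
        super().__init__()
        self.proj = nn.Linear(hidden_size, proj_dim)
        self.attn = nn.Linear(proj_dim, 1)          # self-attention pooling weights
        self.classifier = nn.Linear(proj_dim, num_labels)

    def forward(self, hidden_states, span):          # hidden_states: (seq_len, hidden)
        start, end = span
        span_vecs = self.proj(hidden_states[start:end])       # (span_len, proj_dim)
        weights = torch.softmax(self.attn(span_vecs), dim=0)  # (span_len, 1)
        pooled = (weights * span_vecs).sum(dim=0)             # (proj_dim,)
        return self.classifier(pooled)

# The PLM itself stays frozen: its parameters are excluded from the optimizer
# (e.g., `for p in plm.parameters(): p.requires_grad = False`), so gradients
# from the probe do not alter the pre-trained representation.
```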
We run our experiments in two configurations:
One in which we use the manually labelled hyperbole spans as inputs to the PLM, which follows the classic edge probing procedure. We call this the word-in-context (WiC) representation to emphasize that the model only has access to the rest of the sentence through the context embeddings (Tenney et al., 2019). In the other configuration, which is used as a basis for comparison, we feed the entire sentence span to the model - the so-called sentence-level configuration.
## 6 Results
All our results are reported on the test set.
Edge probing results. The edge probing classification results are in Table 1 and the classification scores for the hyperboles and the literal sentences are in Table 2. We only report last layer scores, as we just evaluate the base representations.
| Configuration | Experiment | Class | Precision | Recall | F1 |
|---------------|-----------|-------|-----------|--------|------|
| Word-in-Context | BERT | literal | 0.70 | 0.66 | 0.68 |
| | | nonliteral | 0.68 | 0.72 | 0.70 |
| | RoBERTa | literal | 0.73 | 0.71 | 0.72 |
| | | nonliteral | 0.71 | 0.73 | 0.72 |
| | Electra | literal | 0.74 | 0.71 | 0.72 |
| | | nonliteral | 0.72 | 0.74 | 0.73 |
| Sentence Level | BERT | literal | 0.78 | 0.61 | 0.69 |
| | | nonliteral | 0.68 | 0.82 | 0.74 |
| | RoBERTa | literal | 0.80 | 0.74 | 0.77 |
| | | nonliteral | 0.75 | 0.82 | 0.78 |
| | ELECTRA | literal | 0.84 | 0.69 | 0.76 |
| | | nonliteral | 0.73 | 0.87 | 0.79 |
Table 2: Performance metrics for each of the models.
| Annotation | WiC | Sentence | Total |
|------------|-------|----------|-------|
| QUAL | 0.784 | 0.865 | 37 |
| QUANT | 0.692 | 0.731 | 26 |
| PDOM | 0.676 | 0.765 | 34 |
| SDOM | 0.828 | 0.862 | 29 |
| NPOSS | 0.769 | 0.821 | 39 |
| POSS | 0.708 | 0.792 | 24 |
| CONV | 0.806 | 0.806 | 36 |
| NCONV | 0.667 | 0.815 | 27 |

Table 3: Recall (percentage of correctly predicted hyperboles) per annotated category for the WiC and sentence-level configurations, along with the number of samples per category.
MDL probing results. We report the compression for each of the experiments in Figure 4. The best layer is consistently near the top layer, but not the top layer itself.
Error analysis. Our error analysis is conducted for the model with the best recall, RoBERTa, and is only conducted for the hyperbolic examples, i.e. the 63 annotated hyperboles in the development set. We choose the best layer based on the compression displayed in Figure 4; i.e. layer 11 for the WiC representation and layer 8 for the sentence-level representation.
Table 3 reports the recalls, i.e. the percentages of correctly predicted hyperboles, for each of the annotated categories, for both of our experiments, along with the distributions of each of the annotations on the 63 samples.
## 7 Discussion
![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

![7_image_2.png](7_image_2.png)

![7_image_3.png](7_image_3.png)

Figure 4: MDL compression for each of the 12 layers (panels: BERT WiC, BERT sentence-level; RoBERTa WiC, RoBERTa sentence-level; ELECTRA WiC, ELECTRA sentence-level).

We observe notably lower scores than for the metaphor probing experiments across the board: Based on the compression reported for the MDL probes, which only reaches up to 1.4 in the best configuration, we can conclude that hyperbolic information is encoded in PLM representations only to a minor extent.
This is in line with our hypothesis that encoding hyperbole may pose a bigger challenge given its primarily pragmatic nature, and also fits with the fact that PLMs have been reported to struggle with pragmatic inference and commonsense knowledge (Rogers et al.,
2020). Perhaps more interestingly, we can inspect the compression for each of the 12 layers reported in Figure 4 to understand where hyperbole is best encoded by the representation, which appears to mostly be in the final layers. This is different from metaphor and may lend further credence to the idea that pragmatics is typically encoded deeper into the PLM.
However, since we are employing a very small dataset, the extent to which we can draw definite conclusions is limited. In the future, we would like to extend our experiments to more data and languages to measure generalizability.
Upon analyzing the MDL compressions of the two model representations, we make an intriguing observation that the sentence-level representation consistently outperforms the WiC
representation, with compressions reaching up to 1.4 for the top layer. This discovery raises thought-provoking questions about the amount of hyperbole information inferred by the contextual embeddings, as hyperbole often surpasses the token or phrase level. For example, consider the sentence, "The temperature was so low, I saw polar bears wearing jackets." In this case, the entire complement sentence creates the hyperbole. This leads to discussions about defining the lexical unit of hyperboles for corpus collection and annotation purposes
(Burgers et al., 2016). As for the model representations themselves, while PLMs theoretically encode context in their representation, it is worth exploring how much information is contained within and between subwords in the WiC representation. Employing interpretability metrics could provide further insights into this matter.
Considering the low inter-annotator agreement and that recall seems to generally increase with the frequency of the subcategory in the sample, it is challenging to draw insights from the model error analysis (see Table 3). However, we may tentatively conclude that the models have an easier time with conventional hyperboles, which is the opposite finding to that of Troiano et al. (2018) for traditional machine learning pipelines. Similarly surprising is that the PLMs have better recall for domain-switching hyperboles than domain-preserving ones, which may also be confounded by a strength variable. Furthermore, when manually inspecting the false positives, we observe that some sentences predicted to be hyperbolic do indeed contain words and phrases with a potential hyperbolic interpretation, e.g. *paradise* in the sentence "He thought a place awaited him in paradise", suggesting that analyzing hyperbole in a larger context might provide further insights.
Finally, the low inter-annotator agreement, particularly on the dimension and type dichotomies, suggests that the hyperbole categories are not intuitively well-understood or discriminated. During discussions with annotators upon completion of the task, we had several instances where overlap of the dimension subcategories was so large that annotators could argue for either one, and it also wasn't clear to annotators when a semantic domain switch was present. The latter suggests that more linguistic training may be necessary to identify combined figures of speech in context, for instance, through application of the hyperbole identification procedure (HIP) (Burgers et al., 2016). As a consequence, we would like to change our approach to hyperbole annotation in future corpus construction and investigate to what extent these categories are indeed computationally relevant. Our negative findings lend credence to the claim by Biddle et al. (2021) that annotation schemes may present a bottleneck for further development of the task. We would also like to explore approaches for model evaluation of hyperbole types using conceptual knowledge bases and linguistic resources; namely leveraging framenets to explore their utility for metaphorical hyperboles, as well as investigating templates using particular syntactic patterns for evaluating quantitative hyperboles.
## 8 Conclusions
This study has attempted to probe three pretrained language models (PLMs) for hyperbolic knowledge to better inspect how this information is encoded in their representations.
We find, predictably, that knowledge of hyperbole is only to a limited extent encoded by PLMs, and, somewhat more surprisingly, that sentence-level representations appear to be superior to word-in-context (WiC) representations, which may further highlight that most hyperbolic information does in fact exist beyond the token or phrase level. In the future, we would like to contribute with more hyperbole data with an operational annotation procedure, extend to cross-lingual experiments, as well as investigate the role of linguistic resources for hyperbole identification.
## References
Muhammad Abulaish, Ashraf Kamal, and Mohammed J
Zaki. 2020. A survey of figurative language and its computational detection in online social networks.
ACM Transactions on the Web (TWEB), 14(1):1–52.
Ehsan Aghazadeh, Mohsen Fayyaz, and Yadollah Yaghoobzadeh. 2022. Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037–2050, Dublin, Ireland. Association for Computational Linguistics.
Yonatan Belinkov. 2022. Probing Classifiers: Promises, Shortcomings, and Advances. *Computational Linguistics*, 48(1):207–219.
Rhys Biddle, Maciek Rybinski, Qian Li, Cecile Paris, and Guandong Xu. 2021. Harnessing privileged information for hyperbole detection. In *Proceedings* of the the 19th Annual Workshop of the Australasian Language Technology Association, pages 58–67.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi.
2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics.
Christian Burgers, Britta C Brugman, Kiki Y Renardel de Lavalette, and Gerard J Steen. 2016. HIP: A
method for linguistic hyperbole identification in discourse. *Metaphor and Symbol*, 31(3):163–178.
Christian Burgers, Kiki Y Renardel de Lavalette, and Gerard J Steen. 2018. Metaphor, hyperbole, and irony: Uses in isolation and in combination in written discourse. *Journal of Pragmatics*, 127:71–83.
Weijie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, and Chang Su. 2022. Probing Simile Knowledge from Pre-trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 5875–5887, Dublin, Ireland.
Association for Computational Linguistics.
Claudia Claridge. 2010. *Hyperbole in English: A*
Corpus-Based Study of Exaggeration. Cambridge University Press.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
arXiv preprint arXiv:2003.10555.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-Training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Pragglejaz Group. 2007. MIP: A Method for Identifying Metaphorically Used Words in Discourse. *Metaphor* and Symbol, 22(1):1–39.
Li Kong, Chuanyi Li, Jidong Ge, Bin Luo, and Vincent Ng. 2020. Identifying Exaggerated Language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7024–7034, Online. Association for Computational Linguistics.
George Lakoff and Mark Johnson. 1980. Conceptual metaphor in everyday language. *The journal of Philosophy*, 77(8):453–486.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach.
Michael McCarthy and Ronald Carter. 2004. "There's millions of them": Hyperbole in everyday conversation. *Journal of Pragmatics*, 36(2):149–184.
Bleau Moores and Vijay Mago. 2022. A survey on automated sarcasm detection on twitter. arXiv preprint arXiv:2202.02516.
Laura Cano Mora. 2009. All or nothing: A semantic analysis of hyperbole. Revista de Lingüística y Lenguas Aplicadas, 4(1):25–35.
Sunny Rai and Shampa Chakraverty. 2020. A survey on computational metaphor processing. *ACM Computing Surveys (CSUR)*, 53(2):1–37.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in BERTology: What we know about how BERT works. *Transactions of the Association* for Computational Linguistics, 8:842–866.
Gerard Steen, Aletta G Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Tryntje Pasma.
2019. Mipvu: A manual for identifying metaphorrelated words. Metaphor identification in multiple languages: MIPVU around the world, pages 24–40.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
Yufei Tian, Arvind krishna Sridhar, and Nanyun Peng.
2021. HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge.
Enrica Troiano, Carlo Strapparava, Gözde Özbal, and Serra Sinem Tekiroğlu. 2018. A computational exploration of exaggeration.
Elena Voita and Ivan Titov. 2020. Information-Theoretic Probing with Minimum Description Length.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yunxiang Zhang and Xiaojun Wan. 2021. MOVER:
Mask, over-generate and rank for hyperbole generation. *arXiv preprint arXiv:2109.07726*.
## A Fine-Grained Annotation Examples
Table 4 shows example data, along with the spans and annotations (taken from the development set of the data). The annotations are constructed along dimension (QUANT/QUAL), type (PDOM/SDOM), possibility (POSS/NPOSS), and conventionality (CONV/NCONV).
| Hyperbole | Literal | Dim. | Type | Poss. | Conv. |
|-----------|---------|------|------|-------|-------|
| Marriage is the grave of love. | I have gone to visit the grave of a friend. | QUAL | SDOM | NPOSS | CONV |
| So much snow that it is like walking in the firmament. | Some stars in the firmament have a name. | QUANT | PDOM | NPOSS | NCONV |
| The ancient castle was so big that it took a week to walk from one end to the other. | It took a week to walk from one end of the region to the other. | QUANT | PDOM | POSS | CONV |
| His feet are colder than the arctic. | The Antarctic is colder than the Arctic. | QUANT | PDOM | NPOSS | NCONV |

Table 4: Sample data with annotations. Token spans are marked by italics around the word or phrase.
anikina-2023-towards | Towards Efficient Dialogue Processing in the Emergency Response Domain | https://aclanthology.org/2023.acl-srw.31 | In this paper we describe the task of adapting NLP models to dialogue processing in the emergency response domain. Our goal is to provide a recipe for building a system that performs dialogue act classification and domain-specific slot tagging while being efficient, flexible and robust. We show that adapter models Pfeiffer et al. (2020) perform well in the emergency response domain and benefit from additional dialogue context and speaker information. Comparing adapters to standard fine-tuned Transformer models we show that they achieve competitive results and can easily accommodate new tasks without significant memory increase since the base model can be shared between the adapters specializing on different tasks. We also address the problem of scarce annotations in the emergency response domain and evaluate different data augmentation techniques in a low-resource setting. | # Towards Efficient Dialogue Processing In The Emergency Response Domain
Tatiana Anikina DFKI / Saarland Informatics Campus, Saarbrücken, Germany [email protected]
## Abstract
In this paper we describe the task of adapting NLP models to dialogue processing in the emergency response domain. Our goal is to provide a recipe for building a system that performs dialogue act classification and domain-specific slot tagging while being efficient, flexible and robust. We show that adapter models (Pfeiffer et al., 2020) perform well in the emergency response domain and benefit from additional dialogue context and speaker information. Comparing adapters to standard fine-tuned Transformer models we show that they achieve competitive results and can easily accommodate new tasks without significant memory increase since the base model can be shared between the adapters specializing on different tasks. We also address the problem of scarce annotations in the emergency response domain and evaluate different data augmentation techniques in a low-resource setting.
## 1 Introduction
Emergency response is a very challenging domain for NLP for a variety of reasons. First, this domain has strict requirements regarding memory and computational efficiency: often it is not feasible to load several large NLP models because of limitations in the available infrastructure (e.g., the memory of the machine where the models are running). Second, the environment is often noisy and the speakers communicate using a domain-specific lexicon and abbreviations. Third, the emergency response environment is highly changeable, and the domain may vary from a rescue operation after a car accident to explosions or building collapse. Hence, the ideal dialogue processing system for the emergency response domain should be memory-efficient, robust and flexible at the same time.
To address the efficiency aspect we use adapters1 (Pfeiffer et al., 2020), which have been tested on a variety of NLP tasks and have shown performance comparable to full fine-tuning while using only 1% of the parameters of the fully fine-tuned models. Adapters are small in size and can easily be shared and combined with different models. This is especially interesting in our use case since we deploy the same base model (bert-base-german-cased) for several tasks2.

1The code and the pre-trained models are available at https://github.com/tanikina/emergency_response_dialogue

2We also tried multilingual BERT, but it resulted in worse performance in our pilot experiments. Hence, we decided to focus on the model that was trained on German only and has a reasonably small size (436 MB).
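To make the sharing of the backbone concrete, a minimal sketch of this setup with the adapter-transformers library is shown below; the adapter and head names are illustrative placeholders, not the identifiers used in the released models.

```python
from transformers import AutoAdapterModel, AutoTokenizer  # requires the adapter-transformers fork

# One shared backbone serves all tasks.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoAdapterModel.from_pretrained("bert-base-german-cased")

# A lightweight adapter (plus prediction head) per task.
model.add_adapter("dialogue_acts")
model.add_classification_head("dialogue_acts", num_labels=8)   # 7 dialogue acts + Other

model.add_adapter("slot_unit")
model.add_tagging_head("slot_unit", num_labels=3)              # BIO tags for the Unit slot

# Training updates only the adapter weights of one task; the backbone stays frozen.
model.train_adapter("dialogue_acts")
model.set_active_adapters("dialogue_acts")

# Switching tasks at inference time amounts to activating a different adapter.
model.set_active_adapters("slot_unit")
```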
To tackle the problem of noisy, incomplete and domain-specific communication we investigate whether it is possible to boost the performance by integrating additional context and experiment with different ways of encoding it (e.g., by adding speaker, previous turn and dialogue summary information). We also experiment with various linguistic features and test how they affect the performance (e.g., by embedding the POS tags or including the ISO-style dialogue act annotations).
Finally, to simulate the low-resource scenario which is very common for the emergency response domain we reduce the amount of the training and development data to 12% of the original dataset and apply different ways of data augmentation including backtranslation, LM-based word replacements and random edit operations.
Figure 1 provides an overview of different experimental settings addressed in this work. To our knowledge, this is the first work that explores dialogue processing in the emergency response domain with adapters and performs a comprehensive study of the context integration and data augmentation in this setting.
## 2 Related Work
![1_image_0.png](1_image_0.png)

Adapters (Houlsby et al., 2019; Rebuffi et al., 2017) seem like a natural choice for lightweight and efficient NLP models. Adapters implement a fine-tuning strategy that involves only a small number of trainable parameters per task. Each adapter adds a small set of newly initialized and trainable weights at each layer of the transformer architecture (Vaswani et al., 2017). Hence, the original network has mostly fixed parameters and can be efficiently transferred between tasks. Adapters have shown good performance, comparable to fully fine-tuned models, on a variety of tasks including, e.g., sentiment analysis, commonsense reasoning, paraphrase detection and entailment (Pfeiffer et al., 2021), and further modifications and improvements to the original idea were proposed in recent work by Rücklé et al. (2020); Fu et al.
(2022). Adapters have been successfully used for low-resource speech recognition (Hou et al., 2021),
cross-lingual transfer (Parovic et al., 2022) and tested on the named entity recognition and classification tasks (Lee et al., 2022).
Also, in the field of dialogue processing there is a growing body of work involving adapter models. For example Xu et al. (2021) inject knowledge into pre-trained language models using adapters and explore grounded dialogue response generation with adapters. Another work by Madotto et al. (2020) proposes a simple and efficient method based on residual adapters in the continual learning setting for task-oriented dialogue systems. Wang et al.
(2021) design a GPT-Adapter-CopyNet system that combines adapters and CopyNet modules into GPT2 in order to perform transfer learning and dialogue entity generation. Their system significantly outperforms the baselines models on both DSTC8 and MultiWOZ data.
Efficiency and robustness are crucial in the low-resource setting when we have a limited amount of data. The main objective of data augmentation is to generate new data points by modifying the existing ones through a variety of transformations. While some of these transformations can be very simple, such as random token deletion or insertion (Wei and Zou, 2019; Miao et al., 2020), others might require more computation and processing power, e.g., backtranslation (Edunov et al., 2018) or LM-based substitutions (Kobayashi, 2018; Kumar et al.,
2020). Feng et al. (2021) and Chen et al. (2021)
provide comprehensive surveys of the techniques and methods for data augmentation in NLP that served as a motivation for our work.
## 3 Data
The dataset used in our experiments is based on the dialogues collected during several robot-assisted disaster response training sessions (Kruijff-
Korbayova et al., 2015; Willms et al., 2019). All dialogues are in German and they represent team communication between a team leader or mission commander and several operators who remotely operate robots in order to explore some area, find hazardous materials, locate fires, damage or victims.
Figure 2 shows a part of one dialogue translated into English.
| speaker | original turn | translation |
|---------|---------------|-------------|
| TL | UGV2 von Teamleader. | UGV2 for team leader. |
| UGV | UGV2, kommen. | UGV2, coming. |
| | … brauchen nochmal schärfere Bilder von dem Fass und der Kennzeichnung. | Yes, UGV2, we need again sharper pictures of the barrel and the sign. |
| | … verstanden, können Sie wiederholen? | I didn't understand you, could you repeat? |
| | … brauchen wir nochmal bessere Bilder, und auch von der Kennzeichnung. | Yes, we need better pictures of the barrel, and also of the sign. |

Figure 2: A part of one dialogue translated into English.
![2_image_1.png](2_image_1.png)
The complete dataset contains 2,542 dialogue turns annotated with dialogue acts and domainspecific slots. For the dialogue act classification we reserve 2,261 turns for training, 281 turns for development and 283 for testing. In the low-resource setting we leave the test set unchanged but reduce the amount of the training samples to 310 (240 in training and 70 in development).
Figure 3 shows the overall distribution of different dialogue act labels in the data and Figure 6 in the appendix provides an example for each label.
There are seven main labels: Call, CallResponse, InfoRequest, InfoProvide, Confirm, Disconfirm, Order and the additional label Other for the cases that do not fit in any of the main categories. The labels are derived based on the domain expertise and represent categories that are important for the emergency response domain. Part of the dataset is also annotated according to the ISO standard for dialogue act classification by Bunt et al. (2020)
![2_image_0.png](2_image_0.png)
and we use these fine-grained labels in some of the experiments described in Section 4.
In the emergency response domain it is very important to correctly recognize and annotate all deployment orders (*Einsatzbefehl* in German). Note that not every utterance classified as request according to the ISO standard would qualify as Order in our domain. E.g., the request *"Could you repeat, please?"* is not a deployment order since it does not require performing a domain-specific action and should be classified as information request
(InfoRequest).
For each turn annotated as Order we also perform the slot tagging. The slots are based on the regulation document of the emergency responders Feuerwehr-Dienstvorschrift (1999). We show an example containing all relevant Order slots in Figure 4. Note that the distribution of slots is quite uneven (see Figure 5). Some slots are present in almost every dialogue turn classified as Order (e.g.,
Unit is present in 67% of the turns and Task appears in 99% of them) while other slots are annotated only in 8% of the turns (Way). Also, the slots can be nested and the same token may belong to several slots. E.g., in *"Schickst du mir noch ein* Foto?" (Will you send me also the photo?), *"du"*
(you) is part of the slot Task and also the slot Unit.
This is the reason why we train separate models for each slot and then combine the results to provide final annotations.
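As a rough illustration of this combination step, the per-slot BIO sequences can simply be merged token by token; the prediction format and the exact span boundaries below are assumptions made for illustration, not the data structures used in our code.

```python
# Per-slot BIO predictions for the same tokenised turn ("Schickst du mir noch ein Foto");
# overlapping slots are allowed, e.g. "du" belongs to both Unit and Task.
predictions = {
    "Unit": ["O", "B-Unit", "O", "O", "O", "O"],
    "Task": ["B-Task", "I-Task", "I-Task", "I-Task", "I-Task", "I-Task"],
}

def merge_slot_annotations(predictions):
    """Collect, for every token position, all slot tags assigned by the per-slot models."""
    num_tokens = len(next(iter(predictions.values())))
    merged = [[] for _ in range(num_tokens)]
    for tags in predictions.values():
        for i, tag in enumerate(tags):
            if tag != "O":
                merged[i].append(tag)
    return merged

print(merge_slot_annotations(predictions))
```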
For the slot tagging task we experiment with the full data as well as with the sampled data, since the distribution of negative versus positive instances per label varies a lot (see Figure 5 for the details). For the sampled data we limit the number of negative samples (turns without the slot annotation) to at most 80% of the corresponding positive samples. Our intuition is that having an uneven distribution with too many negative samples may hinder the model's performance and that it might be easier for the adapter model to learn the tagging task on more balanced data. We test this idea and describe our results in the next section.

![3_image_0.png](3_image_0.png)

Figure 4: Slot tags for a deployment order, e.g. *"A-Trupp zur Brandbekämpfung mit Schaumstrahlrohr zum Pkw über die Wiese vor!"* (A-squad to extinguish the fire with a foam jet nozzle to the car across the meadow!), annotated with the slots Unit (Einheit), Task (Auftrag), Means (Mittel), Goal (Ziel) and Way (Weg).
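A minimal sketch of this balancing step is given below; the dictionary-style turn representation and the fixed random seed are assumptions made for illustration.

```python
import random

def sample_negatives(turns, slot, ratio=0.8, seed=42):
    """Keep all turns that contain the given slot and at most
    ratio * (number of positives) turns that do not."""
    positives = [t for t in turns if slot in t["slots"]]
    negatives = [t for t in turns if slot not in t["slots"]]
    random.Random(seed).shuffle(negatives)
    return positives + negatives[: int(ratio * len(positives))]
```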
## 4 Experiments
Our experiments aim to answer the following research questions:
- Can we replace fully tuned BERT models with adapter models for dialogue act classification and slot tagging in the emergency response domain?
- Does integrating context and linguistic features in the model result in better performance?
- Does data augmentation in the low-resource setting help to improve the performance and what are the best ways to augment the data?
## 4.1 Vanilla Bert Vs. Adapters
In order to check whether adapter models work well for dialogue act classification we compare their performance to vanilla BERT fine-tuned on the same data. Both models use the same base bert-base-german-cased model as a backbone and are trained for 20 epochs. The best performing checkpoint is selected based on the loss on the development set. When only the current turn embeddings are used as input we obtain 0.82 F1 score with the fine-tuned BERT and 0.80 F1 with the adapter model (Table 1). Adding speaker to the input results in 0.80 F1 for BERT and 0.79 F1 score for adapter.
We also compare the performance of the fully tuned BERT vs. adapters on the slot tagging task.
Since the slots can be nested we train a separate model for each slot type (i.e., 5 adapters or 5 finetuned BERT models per setting). We use BIO notation for each slot type and compute F1 scores based on the token-level annotations. The results are summarized in Table 2. Since the distribution among the slots is uneven we also experiment with the setting where we reduce the amount of negative samples and balance the data.
It is clear from the evaluation results presented in Table 2 that adapters consistently outperform BERT on the slot tagging task and also benefit from the sampling of negative examples. Reducing the amount of negative samples gives us 9% increase in the macro F1 score for adapters while it does not bring any improvement for the vanilla BERT and effectively hurts the model's performance in terms of micro F1 (0.86 vs. 0.99). It turns out that we can use fewer parameters of the adapter model to achieve better results with the balanced classes.
Interestingly, the fully fine-tuned BERT model trained on the full data achieves the same macro F1 as the model trained on the sampled data but their micro F1 scores differ (0.99 vs. 0.86). One possible explanation is that since tuning of the BERT model involves more parameters that need to be updated in each iteration the training process becomes less stable. The difference in training stability between the adapters and the fully fledged fine-tuning in the low-resource setting is an interesting research question that needs further investigation.
| Setting | Fine-tuned BERT | Adapter |
|--------------------------|-------------------|-----------|
| OnlyTurn | 0.82 | 0.80 |
| Speaker+Turn | 0.80 | 0.79 |
| Context+Speaker+Turn | 0.91 | 0.84 |
| Context+AllSpeakers+Turn | 0.90 | 0.85 |
| Summary+Speaker+Turn | 0.80 | 0.73 |
Table 1: Macro F1 scores on the dialogue act classification task (BERT vs. adapters).
| Slot Label | Adapt+full | Adapt+sampled | BERT+full | BERT+sampled |
|--------------|--------------|-----------------|-------------|----------------|
| Unit | 0.93 | 0.92 | 0.82 | 0.80 |
| Task | 0.75 | 0.82 | 0.77 | 0.41 |
| Means | 0.86 | 0.89 | 0.82 | 0.88 |
| Goal | 0.57 | 0.81 | 0.59 | 0.67 |
| Way | 0.70 | 0.80 | 0.57 | 0.77 |
| Macro F1 | 0.76 | 0.85 | 0.71 | 0.71 |
| Micro F1 | 0.99 | 0.99 | 0.99 | 0.86 |
## 4.2 Contextual Augmentation
In the next set of experiments we look into the impact of context on the dialogue act classification
(Table 1). First, we train both vanilla BERT and adapter model using only the current turn text as an input (OnlyTurn). This results in 0.82 F1 score for BERT and 0.80 F1 for the adapter. Next, we add the speaker information (Speaker+Turn) and obtain 0.80 for BERT and 0.79 for the adapter model.
Moreover, adding the previous dialogue turn as additional context (Context+Speaker+Turn) results in a big improvement for both fine-tuned BERT
(0.91 F1) and adapter (0.84 F1).
To integrate more context into the model input we also experiment with extractive summarization of the dialogue using the Summarizer model introduced in Miller (2019). We limit dialogue context to 10 previous turns and set the number of summary sentences to 3 (Summary+Speaker+Turn). However, this additional information seems to confuse the model which is especially striking in the case of adapters. Compared to the baseline Speaker+Turn
(0.79 F1), the average score drops by 6 points (0.73 F1). The performance of the BERT model does not decrease in this setting compared to the baseline, but it also does not show any improvement.
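For illustration, the input variants can be built by simple string concatenation; the sketch below shows one plausible way to construct the Context+Speaker+Turn and Summary+Speaker+Turn inputs (the separator and field order are assumptions), with the extractive summary produced by the bert-extractive-summarizer package.

```python
from summarizer import Summarizer  # bert-extractive-summarizer

summarizer = Summarizer()

def encode_context_speaker_turn(prev_turn, speaker, turn):
    # Previous turn as additional context, plus the current speaker and turn.
    return f"{prev_turn} [SEP] {speaker}: {turn}"

def encode_summary_speaker_turn(history, speaker, turn, max_turns=10, num_sentences=3):
    # Summarise up to the last 10 turns into 3 sentences and prepend the summary.
    context = " ".join(history[-max_turns:])
    summary = summarizer(context, num_sentences=num_sentences)
    return f"{summary} [SEP] {speaker}: {turn}"
```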
As a baseline for further experiments we use the version that encodes only the speaker information and the current turn text (Speaker+Turn).
The main reason to select this setting as a baseline instead of OnlyTurn with a slightly higher macro F1 score is the fact that there is an important difference in how these two models annotate instances of the class Order. Speaker+Turn model has a better F1 score for the class Order (0.86)
compared to the OnlyTurn version (0.77) and since correct processing of orders is crucial for our domain we choose this setting for the baseline. Another reason to pick Speaker+Turn and not the bestperforming version that includes additional context
(Context+AllSpeakers+Turn) is the fact that it is simpler and quicker to compute.
## 4.3 Adding Linguistic Information Dialogue Act Classification
A subset of our dataset also provides ISO-based annotations of dialogue acts according to Bunt et al. (2020), which we use to train a separate classifier that generates fine-grained ISO labels. These labels are added to the input of our main classifier that performs the domain-specific dialogue act classification. The distribution of the labels according to the ISO standard is shown in Table 7 in the appendix. We split the data into 1,224 samples for training and 170 for development. Although the overall accuracy of this classifier is only 62%, it performs differently on different labels. The categories that have many instances in the training set (e.g., AutoPositive and TurnAccept) achieve F1 scores around 0.81 and 0.82, but most of the rare labels are misclassified.
After training the adapter-based classifier on the ISO labels we run it on our training, development and test data to annotate the turns with additional ISO labels. Here we do not use the gold labels to simulate a realistic scenario when gold annotations are not available. The generated labels are then translated into German and added to the turn text with a special [SEP] token as a separator. The evaluation results are summarized in Table 3. The first column shows the scores for each of the dialogue acts when the baseline model (Speaker+Turn) is used. The second column shows the performance when additional (generated) labels are added to the input. We obtain an overall 3% improvement in the F1 scores with the additional ISO labels. We also consider a simplified version of the labels when we automatically map the original ISO taxonomy to the closest equivalents in the domain-specific taxonomy (see Table 8 in the appendix). The performance of the adapter model with such simplified dialogue act annotations is slightly worse than the ISO version (0.81 vs. 0.82).
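In other words, the generated ISO label is appended to the turn text behind a [SEP] token. A minimal sketch of this input construction follows; the German label translations shown here are hypothetical examples, not the exact mapping we used.

```python
# Hypothetical German surface forms for a few English ISO labels.
ISO_TO_GERMAN = {
    "Inform": "Mitteilung",
    "Request": "Aufforderung",
    "TurnAssign": "Gesprächsübergabe",
}

def add_iso_label(speaker, turn, predicted_iso_label):
    """Append the (translated) generated ISO dialogue act label to the classifier input."""
    german_label = ISO_TO_GERMAN.get(predicted_iso_label, predicted_iso_label)
    return f"{speaker}: {turn} [SEP] {german_label}"
```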
## Slot Tagging
To investigate whether linguistic annotations are also useful for the slot tagging task we annotate each word with its part of speech tag using the SpaCy library and 7 coarse categories including noun, pronoun, verb, preposition, adverb, adjective and other. For each tag we generate an embedding and combine it with the BERT embedding of the corresponding token. To process the combined embeddings we use a custom adapter head that adds two linear layers on top of the Transformer model, the tanh activation function and the final fully connected layer that outputs scores for the slot labels
(BIO tags). The evaluation results of the adapter models with and without embedded POS information are presented in Table 4. Although the overall F1 score does not change we can see an improvement for almost every category (Task, Means and Way) except for the category Goal3. It is possible that for the class Goal the over-reliance on the POS information leads to some misclassifications.
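A rough PyTorch sketch of such a custom head is shown below: a trainable embedding for the seven coarse POS categories is combined with the token representation and passed through two linear layers with a tanh non-linearity before the final BIO classification layer. The embedding size and the use of concatenation (rather than addition) are assumptions, not the exact configuration.

```python
import torch
import torch.nn as nn

class PosAwareTaggingHead(nn.Module):
    """Combines encoder token states with POS tag embeddings for BIO slot tagging."""

    def __init__(self, hidden_size=768, num_pos_tags=7, pos_dim=32, num_labels=3):
        super().__init__()
        self.pos_embedding = nn.Embedding(num_pos_tags, pos_dim)
        self.linear1 = nn.Linear(hidden_size + pos_dim, hidden_size)
        self.linear2 = nn.Linear(hidden_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, token_states, pos_ids):
        # token_states: (batch, seq_len, hidden_size) from the adapter-equipped encoder
        # pos_ids:      (batch, seq_len) indices of the coarse POS categories
        pos_emb = self.pos_embedding(pos_ids)
        combined = torch.cat([token_states, pos_emb], dim=-1)
        hidden = torch.tanh(self.linear2(torch.tanh(self.linear1(combined))))
        return self.classifier(hidden)  # per-token BIO logits
```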
## 4.4 Data Augmentation In The Low-Resource Setting
In order to simulate a low-resource scenario for the dialogue act classification we reduce the amount of the training and development data. The test set is left unchanged but the training set is reduced from 2,261 to 240 instances and the development set from 281 to 70 instances. As shown in Table 5 the performance drops to 0.47 F1 score on the test set when the model is trained on the reduced data.
First, we experiment with backtranslations using the NLPAug library. We translate between German and English and then back to German with Helsinki-NLP/opus-mt models and add these additional data as new instances with the same labels to the training data. This gives us an average improvement of 9 points in the F1 score. We also test whether adding more backtranslated samples helps to improve the performance and add the samples translated from German to French and back. However, doubling the amount of backtranslated data does not bring any further improvements (see Table 5). When looking at the generated backtranslations we notice that many instances are correct and represent good paraphrases. E.g., "Und guck mal ob du ein genaues Bild von diesen Samples kriegen kannst" (And see if you can get a clear picture of these samples) was backtranslated into "Und sehen Sie, ob Sie ein genaues Bild von diesen Proben bekommen können", which is semantically equivalent. However, sometimes the generated samples contain repetitions, hallucinations or incorrect translations. For example, *"Einsatzleiter"* (group leader) was translated into *"Operations Managers"*,
which is not a valid term in the emergency response domain.
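A minimal sketch of the backtranslation step with NLPAug's wrapper around the Helsinki-NLP/opus-mt models (device placement and batching are omitted):

```python
import nlpaug.augmenter.word as naw

# German -> English -> German round trip.
back_translation = naw.BackTranslationAug(
    from_model_name="Helsinki-NLP/opus-mt-de-en",
    to_model_name="Helsinki-NLP/opus-mt-en-de",
)

turn = "Und guck mal ob du ein genaues Bild von diesen Samples kriegen kannst"
paraphrase = back_translation.augment(turn)  # a string (or a list in newer nlpaug versions)
# The paraphrase is added to the training set with the same dialogue act label.
```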
Although backtranslation brings a substantial boost in performance, it also involves computationally heavy translation models, requires some extra processing time4 and may not be feasible for some language pairs. Hence, we also experiment with cheaper and less time- and resource-consuming methods for data augmentation. First, we apply random masking to different proportions of the original tokens and generate substitutions using the bert-base-german-cased language model. In Table 6, each row shows the proportion of replaced tokens and each column shows the number of augmentation rounds. When selecting a new word for the masked token we set the parameter topk to 10 and iterate over all generated tokens to select one that is different from the original word and does not represent a subtoken starting with \#\#; we also ignore all
[unused punctuation] tokens. Some of the LM-
4It takes around 7 minutes to backtranslate 240 instances.
| Dialogue Act | Adapter Baseline | Adapter+ISO DA | Adapter+simple ISO DA |
|----------------|--------------------|------------------|-------------------------|
| Call | 0.88 | 0.85 | 0.84 |
| CallResponse | 0.84 | 0.81 | 0.80 |
| InfoRequest | 0.98 | 0.83 | 0.97 |
| InfoProvide | 0.87 | 0.88 | 0.88 |
| Confirm | 0.44 | 0.52 | 0.49 |
| Disconfirm | 0.44 | 0.73 | 0.73 |
| Order | 0.86 | 0.83 | 0.79 |
| Other | 1.00 | 1.00 | 1.00 |
| Macro F1 | 0.79 | 0.82 | 0.81 |
Table 3: Performance of the adapter model with and without additional ISO dialogue act labels (F1 scores).
| Slot Label | Adapter Baseline | Adapter+POS |
|--------------|--------------------|---------------|
| Unit | 0.92 | 0.92 |
| Task | 0.82 | 0.85 |
| Means | 0.89 | 0.91 |
| Goal | 0.81 | 0.76 |
| Way | 0.80 | 0.82 |
| Macro F1 | 0.85 | 0.85 |
based replacements are near-synonyms and match the context quite well (e.g., substituting *"Realbild"*
(real picture) with *"Gesamtbild"* (overall picture)).
However, sometimes the substituted token changes the meaning significantly. For instance, when replacing "ja" in *"ja kommt sofort"* (yes, coming immediately) with *"Geld"* (money) we generate the sentence *"Geld kommt sofort"* (money comes immediately), which is nonsensical in our domain. We believe that this might be the reason why the performance of this approach is not as consistent as that of backtranslation, although some settings (e.g.,
60% LM replacements 5x) achieve similar performance. Also, we observe that replacing more than 60% tokens or augmenting more than 10 times is not beneficial for the model and leads to decreased performance.
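A simplified sketch of this substitution procedure using a fill-mask pipeline over bert-base-german-cased is shown below; whitespace tokenisation and the alphabetic filter are simplifications of the actual filtering rules.

```python
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-german-cased", top_k=10)

def lm_replace(turn, replace_ratio=0.4, seed=0):
    """Mask a proportion of the tokens and replace them with LM suggestions."""
    rng = random.Random(seed)
    tokens = turn.split()
    n_replace = max(1, int(replace_ratio * len(tokens)))
    for idx in rng.sample(range(len(tokens)), n_replace):
        original = tokens[idx]
        masked = " ".join(fill_mask.tokenizer.mask_token if i == idx else t
                          for i, t in enumerate(tokens))
        for candidate in fill_mask(masked):
            word = candidate["token_str"].strip()
            # Skip the original word, subword pieces and punctuation/unused tokens.
            if word != original and not word.startswith("##") and word.isalpha():
                tokens[idx] = word
                break
    return " ".join(tokens)
```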
The simplest and cheapest way of augmenting the data in terms of both time and computational resources is random editing. We add new instances by applying three different operations to randomly selected tokens: insert, delete or swap and similarly to the case of LM substitutions we experiment with different settings w.r.t. the number of edited tokens as well as the amount of the augmented data. As shown in Table 6 we get an overall improvement over the baseline model with 0.47 F1 score but there is no clear pattern regarding how many times or how many tokens should be changed. The experimental results show that the gains from adding new edited data are diminishing after 5 rounds of augmentation and the best performance can be achieved with 5 augmentation rounds and 40% edited tokens (Macro F1 0.57).
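The random-edit augmentation can be sketched as follows; the parameter names and the choice to duplicate an existing token on insertion are assumptions made for illustration.

```python
import random

def random_edit(turn, edit_ratio=0.4, seed=0):
    """Apply insert/delete/swap operations to a proportion of the tokens."""
    rng = random.Random(seed)
    tokens = turn.split()
    n_edits = max(1, int(edit_ratio * len(tokens)))
    for _ in range(n_edits):
        op = rng.choice(["insert", "delete", "swap"])
        if op == "insert":
            tokens.insert(rng.randrange(len(tokens) + 1), rng.choice(tokens))
        elif op == "delete" and len(tokens) > 1:
            tokens.pop(rng.randrange(len(tokens)))
        elif op == "swap" and len(tokens) > 1:
            i, j = rng.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)
```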
## Training Details
All the experiments reported in this paper were performed on a single NVIDIA GeForce RTX 2080 GPU. We use the adapter-transformers library to train the adapter models and the transformers library for tuning the standard BERT models. As a base model we use bert-base-german-cased. We run the SpaCy library for POS tag annotation with the de_core_news_sm model for German and Summarizer for generating dialogue summaries. Backtranslations are performed with the data augmentation library NLPAug. Further details about the exact versions of the software and training hyperparameters can be found in the appendix (Tables 9 and 10).
## 5 Discussion
Our experiments show that adapter models can be successfully applied in a very specific and challenging domain such as emergency response. Although fine-tuning BERT gives a slightly better performance (0.80 vs. 0.79 F1 for the baseline), adapters are much more efficient in terms of memory and
| Dialogue Act | Baseline (full) | Baseline (low-resource) | Backtranslated 1x | Backtranslated 2x |
|----------------|-------------------|---------------------------|---------------------|---------------------|
| Call | 0.88 | 0.32 | 0.68 | 0.63 |
| CallResponse | 0.84 | 0.35 | 0.78 | 0.69 |
| InfoRequest | 0.98 | 0.87 | 0.70 | 0.79 |
| InfoProvide | 0.87 | 0.59 | 0.65 | 0.71 |
| Confirm | 0.44 | 0.56 | 0.66 | 0.65 |
| Disconfirm | 0.44 | 0.29 | 0.35 | 0.35 |
| Order | 0.86 | 0.76 | 0.64 | 0.67 |
| Other | 1.00 | 0.05 | 0.00 | 0.00 |
| Macro F1 | 0.79 | 0.47 | 0.56 | 0.56 |
Table 5: Performance of the adapter model on the full and low-resource dialogue act classification with and without backtranslations (F1 scores).
| LM-based word replacements | | | | |
|---------------------------------------------------------------------------------------------------------------------------------------------------|------|------|------|------|
| % | 1x | 2x | 5x | 10x |
| 0.1 | 0.50 | 0.50 | 0.49 | 0.51 |
| 0.2 | 0.45 | 0.49 | 0.48 | 0.52 |
| 0.4 | 0.54 | 0.53 | 0.55 | 0.54 |
| 0.6 | 0.52 | 0.53 | 0.56 | 0.54 |
| Random edits: insert, delete, swap % 1x 2x 5x 10x 0.1 0.48 0.52 0.55 0.53 0.2 0.54 0.51 0.56 0.55 0.4 0.52 0.52 0.57 0.54 0.6 0.56 0.54 0.53 0.54 | | | | |
Table 6: Dialogue act classification performance (macro F1) on the augmented data. The baseline macro F1 is 0.47.
computational resources. As shown in Table 10 in the appendix an average size of an adapter model is 3.6MB compared to 436.4MB of the fully tuned BERT model. Also, adapters are very flexible and can be easily combined and stacked in different ways to perform a variety of annotations on top of the same base model.
We found that contextual augmentation (Context+AllSpeakers+Turn setting) is very beneficial for adapters and helps to increase F1 score up to 6 points compared to the baseline version. However, including longer context and dialogue summary actually confuses the model and hurts the performance. Hence, we conclude that for the dialogue act classification task the best way of integrating context is to combine the current and the previous turn with the speaker information. Adding linguistic features such as ISO dialogue acts and POS
tags also helps to boost the performance but to a smaller extent (e.g. adding an ISO label increases F1 score by up to 3 points). The slot tagging task with adapters outperforms vanilla BERT in all settings and greatly benefits from the data balancing and negative sampling.
In the low-resource setting with 12% of the original data we find that adding backtranslated samples helps to improve the performance by up to 9 F1 points. However, multiple backtranslations are not necessarily useful and performance plateaus after one round of augmentation. LM-based word replacements and random edits can achieve similar performance but have a greater variance across the settings with different numbers of edits and augmentation rounds.
The dialogue turn tokens have different relevance to the task in the emergency response domain and replacing words blindly may result in unrealistic or simply wrong instances. E.g., *"kommen"* (coming) has a specific meaning according to the communication protocol used by the responders and represents an instance of the CallResponse class. Replacing *"kommen"* with *"gehen"* (going)
or another similar verb results in the wrong interpretation and should not be labeled as CallResponse. In the future we would like to explore various constraints on the token substitutions and include more domain knowledge and ontology information to perform targeted replacements and edits.
Active learning for text classification (Schröder and Niekler, 2020; Zhang et al., 2022) is another approach that may work well in our domain. We have already shown that adapters benefit from balancing the data and it would be interesting to see whether they further improve by learning in stages, when the model starts with the balanced dataset with easy-to-classify labels and the difficulty level gradually increases with each epoch. Also, in the future we would like to explore conditional text generation with models like BART (Lewis et al., 2019)
or T5 (Raffel et al., 2020) which can be trained to generate text given the corresponding label.
## 6 Limitations
The main limitation of our work is the focus on a specific domain and a dataset that is not yet publicly available. However, we should note that the dataset can be requested for further research and replication studies and it will be released in the future. We believe that testing adapters with different settings in the emergency response domain is a valuable contribution, but we are also aware of the fact that the dataset used in our experiments is not large or exhaustive enough to cover the full variety of topics relevant for emergency response.
For example, our data cover cases of explosions, leakages of hazardous materials and building collapse but do not include any dialogues for open field rescue operations or car accidents.
Another issue that is worth mentioning is the fact that all recordings were collected during the training sessions and not the actual missions. Hence, the responders might be under less pressure than in a real life-threatening situation and their communication might be more of a textbook case. However, all simulations had a realistic setting that includes several operators, robots and points of interest (objects or locations) and we believe that the recorded communication is representative for the domain in question.
## 7 Conclusion
In this work we evaluate the performance of several adapter models in the emergency response domain.
We demonstrate that adapters show similar performance to the vanilla fine-tuned BERT in the baseline setting (0.79 vs. 0.80 F1 score) while using only 1% of the parameters of the fully tuned model.
Our experiments show that including additional context such as previous turn and speaker can improve the performance by up to 6 points in F1 score.
Also adding linguistic annotations such as ISO dialogue acts boosts the performance in dialogue act classification. The slot tagging task mostly benefits from the balanced data. As for the low-resource setting, it shows a substantial improvement over the baseline (9 F1 points) when a single round of backtranslated turns is added to the training set.
## Acknowledgements
The author was supported by the German Ministry of Education and Research (BMBF) in the project CORA4NLP (grant Nr. 01IW20010).
We also thank the anonymous reviewers for their valuable feedback as well as Prof. Josef van Genabith, Dr. Simon Ostermann and Bernd Kiefer for their advice and support of this project.
## References
Harry Bunt, Volha Petukhova, Emer Gilmartin, Catherine Pelachaud, Alex Chengyu Fang, Simon Keizer, and Laurent Prévot. 2020. The ISO standard for dialogue act annotation, second edition. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 549–558.
Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. 2021. An empirical survey of data augmentation for limited data learning in nlp. *Transactions of the Association for Computational Linguistics*, 11:191–211.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics.
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for NLP. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 968–988, Online. Association for Computational Linguistics.
Feuerwehr-Dienstvorschrift. 1999. Feuerwehrdienstvorschrift 100 führung und leitung im einsatz:
Führungssystem, bundesamt für bevölkerungsschutz und katastrophenhilfe.
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hungyi Lee. 2022. AdapterBias: Parameter-efficient token-dependent representation shift for adapters in
NLP tasks. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2608–2621, Seattle, United States. Association for Computational Linguistics.
Wenxin Hou, Hanlin Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun Xu, and Takahiro Shinozaki. 2021.
Exploiting adapters for cross-lingual low-resource speech recognition. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 30:317–329.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457, New Orleans, Louisiana. Association for Computational Linguistics.
Ivana Kruijff-Korbayova, Francis Colas, Mario Gianni, Fiora Pirri, Joachim Greeff, Koen Hindriks, Mark Neerincx, Petter Ogren, Tomáš Svoboda, and Rainer Worst. 2015. Tradr project: Long-term human-robot teaming for robot assisted disaster response. KI -
Künstliche Intelligenz, 29.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. In *Proceedings of the 2nd Workshop* on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China. Association for Computational Linguistics.
Jaeseong Lee, Seung-won Hwang, and Taesup Kim.
2022. FAD-X: Fusing adapters for cross-lingual transfer to low-resource languages. In *Proceedings of* the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 57–64, Online only. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Annual Meeting of the Association for Computational Linguistics.
Edward Ma. 2019. Nlp augmentation.
Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul A. Crook, Bing Liu, Zhou Yu, Eunjoon Cho, and Zhiguang Wang. 2020. Continual learning in task-oriented dialogue systems. In Conference on Empirical Methods in Natural Language Processing.
Zhengjie Miao, Yuliang Li, Xiaolan Wang, and Wang Chiew Tan. 2020. Snippext: Semi-supervised opinion mining with augmented data. *Proceedings* of The Web Conference 2020.
Derek Miller. 2019. Leveraging BERT for extractive text summarization on lectures. *CoRR*,
abs/1906.04165.
Marinela Parovic, Goran Glavas, Ivan Vulic, and Anna Korhonen. 2022. Bad-x: Bilingual adapters improve zero-shot cross-lingual transfer. In North American Chapter of the Association for Computational Linguistics.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021.
AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487–503, Online. Association for Computational Linguistics.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun
Cho, and Iryna Gurevych. 2020. Adapterhub: A
framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In *NIPS*.
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers. In *Conference on Empirical Methods in Natural Language Processing*.
Christopher Schröder and Andreas Niekler. 2020. A
survey of active learning for text classification using deep neural networks. *ArXiv*, abs/2008.07267.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *NIPS*.
Weizhi Wang, Zhirui Zhang, Junliang Guo, Yinpei Dai, Boxing Chen, and Weihua Luo. 2021. Task-oriented dialogue system as natural language generation. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In *Conference on Empirical* Methods in Natural Language Processing.
Christian Willms, Constantin Houy, Jana-Rebecca Rehse, Peter Fettke, and Ivana Kruijff-Korbayová.
2019. Team communication processing and process analytics for supporting robot-assisted emergency response. In IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2019, Würzburg, Germany, September 2-4, 2019, pages 216–221.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yan Xu, Etsuko Ishii, Zihan Liu, Genta Indra Winata, Dan Su, Andrea Madotto, and Pascale Fung.
2021. Retrieval-free knowledge-grounded dialogue response generation with adapters. In *Workshop on* Document-grounded Dialogue and Conversational Question Answering.
Zhisong Zhang, Emma Strubell, and Eduard Hovy. 2022.
A survey of active learning for natural language processing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6166–6190, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
## A Appendix
| label | original | translation |
|-------|----------|-------------|
| Call | UGV 2 von Teamleader. | UGV 2 for team leader. |
| CallResponse | UGV 2, kommen. | UGV 2, coming. |
| InfoRequest | Du sprachst eben von einer anderen Ebene, habt ihr die schon erreicht? | You were talking about another floor, have you already reached it? |
| InfoProvide | Foto ist erstellt und geteilt. | Photo was made and shared. |
| Confirm | Ja, mache ich. | Yes, I will do this. |
| Disconfirm | Wir haben aktuell immer noch Probleme mit der Steuerung. | We are currently still having problems with the controls. |
| Order | Schickst du mir noch mal ein aktuelles Foto euren Standortes? | Will you send me again the current photo of your position? |

Figure 6: Dialogue Act Examples
| ISO Dialogue Act | Samples | ISO Dialogue Act | Samples |
|--------------------|-----------|-----------------------|-----------|
| Allo-positive | 4 | Agreement | 5 |
| Auto-negative | 5 | DeclineOffer | 5 |
| AddressRequest | 10 | ChoiceQuestion | 10 |
| Instruct | 10 | SetQuestion | 11 |
| Pausing | 17 | Promise | 18 |
| AcceptOffer | 19 | CheckQuestion | 20 |
| TurnTake | 20 | Disconfirm | 24 |
| Other | 29 | Question | 36 |
| Confirm | 37 | PropositionalQuestion | 38 |
| Offer | 39 | Answer | 45 |
| AcceptRequest | 47 | Request | 107 |
| Auto-positive | 159 | TurnAccept | 207 |
| TurnAssign | 217 | Inform | 255 |
Table 7: Distribution of the ISO dialogue acts.
| Simplified Dialogue Act | Original ISO Labels |
|---------------------------|----------------------------------------------------------|
| Call | TurnTake, TurnAssign |
| CallResponse | TurnAccept |
| InfoRequest | Question, ChoiceQuestion, SetQuestion, CheckQuestion, PropositionalQuestion |
| InfoProvide | Answer, Inform, Offer, Promise, AddressRequest, Instruct |
| Confirm | Confirm, Agreement, AcceptOffer, AcceptRequest |
| Disconfirm | Disconfirm, Auto-negative |
| Order | Request |
| Other | All other labels |
Table 8: Mapping between the ISO labels and the domain-specific dialogue acts.
| Library | Version | URL | Reference |
|---------|---------|-----|-----------|
| Adapter-transformers | 3.1.0 | https://github.com/adapter-hub/adapter-transformers | Pfeiffer et al. (2020) |
| Transformers | 4.18.0 | https://github.com/huggingface/transformers/ | Wolf et al. (2020) |
| Summarizer | 0.10.1 | https://github.com/dmmiller612/bert-extractive-summarizer | Miller (2019) |
| NLPAug | 1.1.10 | https://github.com/makcedward/nlpaug | Ma (2019) |
| SpaCy | 3.2.4 | https://spacy.io/ | NA |

Table 9: External libraries used in the experiments.
| Parameters | Adapt Dialogue Acts | BERT Dialogue Acts | Adapt Slots | BERT Slots |
|--------------------|------------------------|------------------------|---------------|--------------|
| Base Model | bert-base-german-cased | bert-base-german-cased | | |
| Learning Rate | 1e-4 | 1e-4 | 1e-3 | 1e-5 |
| Number of Epochs | 20 | 20 | 12 | 12 |
| Batch Size | 32 | 16 | 16 | 16 |
| Optimizer | AdamW | AdamW | AdamW | AdamW |
| Avg. Training Time | 6 min | 22 min | 4 min | 4 min |
| Avg. Model Size | 3.6MB | 436.4MB | 3.6MB | 434.1MB |
Table 10: Training parameters for different model types. The best performing model was selected based on the loss on the development set. |
mai-carson-berndsen-2023-already | {I} already said that! Degenerating redundant questions in open-domain dialogue systems. | https://aclanthology.org/2023.acl-srw.33 | Neural text generation models have achieved remarkable success in carrying on short open-domain conversations. However, their performance degrades significantly in the long term, especially in their ability to ask coherent questions. A significant issue is the generation of redundant questions where the answer has already been provided by the user. We adapt and evaluate different methods, including negative training, decoding, and classification, to mitigate the redundancy problem. We also propose a simple yet effective method for generating training data without the need for crowdsourcing human-human or human-bot conversations. Experiments with the BlenderBot model show that our combined method significantly reduces the rate of redundant questions from 27.2{\%} to 8.7{\%}, while improving the quality of the original model. The code, dataset, and trained models can be found at our repository. | # I Already Said That! Degenerating Redundant Questions In Open-Domain Dialogue Systems
Long Mai, Julie Carson-Berndsen ML-Labs, School of Computer Science, University College Dublin, Ireland [email protected], [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Neural text generation models have achieved remarkable success in carrying on short opendomain conversations. However, their performance degrades significantly in the long term, especially in their ability to ask coherent questions. A significant issue is the generation of redundant questions where the answer has already been provided by the user. We adapt and evaluate different methods, including negative training, decoding, and classification, to mitigate the redundancy problem. We also propose a simple yet effective method for generating training data without the need for crowdsourcing human-human or human-bot conversations.
Experiments with the BlenderBot model show that our combined method significantly reduces the rate of redundant questions from 27.2% to 8.7%, while improving the quality of the original model. The code, dataset, and trained models can be found at our repository1.
## 1 **Introduction**
Despite recent significant improvements in text generation techniques, open-domain dialogue generation is nowhere near perfect. Large-scale neuralbased models, such as GPT-3 (Brown et al., 2020)
and BlenderBot (Roller et al., 2020b; Chen et al.,
2021; Shuster et al., 2022), still present many issues including but not limited to contradiction (Li et al.,
2021a), "hallucinations" (Shuster et al., 2021), offensive and toxic responses (Roller et al., 2020a; Dinan et al., 2022), which undermine their use in real-world applications. As a result, many social chatbots (Hakkani-Tur, 2021) still rely heavily on hand-designed dialogue managers and scripted responses. End-to-end neural-based models are only used for handling unexpected inputs, but only for a few turns, before giving back control to the hand-designed dialogue manager (Konrád et al., 2021).
1https://github.com/mailong25/redundancy-dialogue

Figure 1: Examples of redundant questions generated by the BB3 model.

Although neural-based models have shown superior performance in generating statement responses,
they are also reported to ask undesirable questions such as redundant, irrelevant, and topic-changing questions (Konrád et al., 2021; Paranjape et al.,
2020). This is because the models are often trained on short conversations, which results in generating questions that prioritize local appropriateness over global cohesiveness. This is why the quality of generated questions often degrades rapidly when the conversation is carried on over multiple turns.
To address difficulties of long-term dialogue generation, a multi-session dialogue dataset (MSC)
(Xu et al., 2021) has been proposed with an average of 53 turns per conversation; this is significantly higher than any of the previous datasets, which have 2-15 turns. The authors also proposed a memory-augmented model that makes use of a summary of the conversation for generating globally coherent responses. However, the issue of redundant questions is still present. Figure 1 shows examples of redundant questions generated by the recent Blenderbot 3.0 (BB3) chatbot (Shuster et al., 2022), partly trained on MSC
with memory-augmentation. Redundant questions can be categorized into explicit and implicit. Explicit are questions that have been asked previously in the dialogue context while implicit are the ones in which the answers are already given or can be inferred but was not previously asked.
The problem of redundant questions can also be attributed to the maximum likelihood training objective that does not explicitly teach the model what kinds of questions it should not ask. Although several techniques, such as unlikelihood training
(Welleck et al., 2019), negative training (He and Glass, 2019), and contrastive learning (Su et al.,
2022; Su and Collier, 2022) have been proposed to mitigate undesirable behaviors of maximum likelihood training, none of them have been focused on preventing bad questions from being generated.
This study is the first to address the problem of redundant questions in open-domain dialogue systems. We adapt and evaluate different methods, including unlikelihood training, contrastive training, contrastive decoding, and classification to mitigate the redundancy problem. Whether a question is redundant or not is determined based on the previous speaker's personas, which are input to the model alongside the truncated dialogue history. As there are no relevant datasets for this task, we created the first one, called the Non-Redundant Questions
(NRQ) dataset, to facilitate training. To demonstrate the effectiveness of the proposed method, we apply it to improve the question-asking ability of the Blenderbot 2.0 model (BB2) (Chen et al., 2021)
- a simpler version, but comparable to the recent BB3 model. Experimental results show that our proposed methods reduce the redundant question rate of the original BB2 model from 27.2% to 8.7%,
which results in better overall performance.
## 2 **Related Work**

## 2.1 **Decoding Methods**
The generation of redundant questions is highly related to repetition problems in neural-based dialogue models in which the model tends to copy words and phrases from the preceding context (Xu et al., 2022). Prior studies often tackled this issue by controlling the decoding stage. Several beam search variants and stochastic decoding methods, such as top-k (Fan et al., 2018) or nucleus sampling (Holtzman et al., 2019), have been proposed to reduce the level of repetition by favoring less likely but non-repetitive candidates. Contrastive decoding (Su and Collier, 2022) is also proposed to mitigate the repetition issue. Another simple yet effective approach is N-gram blocking (Kulikov et al.,
2018), in which N-grams present in the preceding context are blocked during candidate expansion. However, this solution is not suitable for dealing with implicit or explicit redundant questions that share no N-gram with the preceding context.
## 2.2 **Training Methods**
Although improved decoding algorithms can reduce redundant question rates, the underlying issue has not been resolved: the model still assigns a high probability to undesirable response candidates.
Several training methods have been proposed to address this problem. For dialogue response generation, (He and Glass, 2019) proposed a negative training framework to resolve the problem of malicious and generic responses. (Welleck et al., 2019)
stated that the standard likelihood training objective for text generation is a flawed approach, which contributes significantly to the generation of undesirable behaviors. They then proposed an unlikelihood training objective that forces unlikely generations to be assigned a lower probability by the model. The method is then applied to reduce not only dull and repetitive sentences but also inconsistent and contradictory responses (Li et al., 2021b).
Another approach to discourage the model from generating undesirable texts is contrastive training
(Cao and Wang, 2021; Li et al., 2022), which aims to differentiate the embedding representations of positive and negative responses.
## 3 **Methodology**

## 3.1 **Dialogue Generation**
The goal of open-domain dialogue generation is to predict the target response $y = (y_1, y_2, \ldots, y_n)$, given the dialogue context $x = (x_1, x_2, \ldots, x_m)$ and augmented information $s = (s_1, s_2, \ldots, s_k)$. The dialogue context $x_{1:m}$ is the concatenation of history utterances from both speakers, while the augmented information $s_{1:k}$ can be scenarios, external knowledge, speaker personas, etc.
Since using the full dialogue context is computationally expensive, prior studies often use a truncated one, e.g. last 128 tokens, alongside personas from both speakers. The introduction of personas is to make sure the newly generated response is consistent with what has been said in the dialogue history. In this study, we propose another utility of speaker personas: to avoid asking redundant questions. For example, if one of the partner's personas is *I am a vegan*, then the chatbot should not ask a question like *What is your favorite kind of meat?*.
To augment the generation with personas, we use the Fusion-in-Decoder method (Izacard and Grave, 2020), as shown in Figure 2. We prepend each of the top N personas to the dialogue context and encode them independently using an encoder. The decoder then attends to the concatenated encoding outputs to produce a final response. To extract speaker personas from conversation history, we use a pre-trained BB2 Memory Decoder from ParlAI2. All partner personas are used to produce the responses.

Figure 2: Response generation with augmented speaker personas using the Fusion-in-Decoder method.
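A minimal sketch of how such Fusion-in-Decoder-style inputs could be assembled is shown below; the separator convention, the 128-token truncation, and the stand-in `facebook/bart-base` tokenizer are illustrative assumptions rather than the exact BB2 configuration.

```python
# Sketch: building Fusion-in-Decoder style inputs, one encoder pass per persona.
from transformers import AutoTokenizer

def build_fid_inputs(personas, dialogue_history, tokenizer, max_context_tokens=128):
    # Keep only the last `max_context_tokens` tokens of the flattened history.
    history_ids = tokenizer.encode(" ".join(dialogue_history), add_special_tokens=False)
    truncated_context = tokenizer.decode(history_ids[-max_context_tokens:])

    # One encoder input per persona: "persona: ... context: ...".
    encoder_texts = [f"persona: {p} context: {truncated_context}" for p in personas]

    # Each text is encoded independently; the decoder later attends to the
    # concatenation of all encoder outputs (the Fusion-in-Decoder trick).
    return tokenizer(encoder_texts, padding=True, truncation=True, return_tensors="pt")

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # stand-in tokenizer
batch = build_fid_inputs(
    personas=["I am a vegan.", "I work from home."],
    dialogue_history=["Hi!", "Hello, how was your weekend?"],
    tokenizer=tokenizer,
)
print(batch["input_ids"].shape)  # (num_personas, seq_len)
```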
## 3.2 **Likelihood Training**
Given a dataset $D^+ = \{(x^+, s^+, y^+)\}$ collected from real human conversations, we train a response generation model using standard maximum likelihood estimation (MLE):

$$\mathcal{L}_{MLE}(p_{\theta},x^{+},s^{+},y^{+})=-\sum_{t=0}^{|y^{+}|}\log p_{\theta}(y_{t}^{+}\mid x^{+},s^{+},y_{<t}^{+})$$

where $x^+$ is the truncated dialogue context, $s^+$ is the speaker personas, $y^+$ is the next target response, and $y_t^+$ is the $t$-th token of $y^+$.
## 4 **Redundancy Mitigation Methods**

## 4.1 **Unlikelihood Training**
We apply the unlikelihood loss (UL) (Welleck et al.,
2019) to discourage the model from generating undesirable responses. Given an incoherent dataset $D^- = \{(x^-, s^-, y^-)\}$, the loss is computed as:

$$\mathcal{L}_{UL}(p_{\theta},x^{-},s^{-},y^{-})=-\sum_{t=0}^{|y^{-}|}\beta(y_{t}^{-})\,\log\big(1-p_{\theta}(y_{t}^{-}\mid x^{-},s^{-},y_{<t}^{-})\big)$$

where $y^-$ is the undesirable response and $s^-$ contains the partner persona that makes $y^-$ a redundant question. $\beta(y_t^-)$ is a candidate-dependent scale that controls how much the $t$-th token should be penalized. We set $\beta = 0$ for the first two tokens of the question and for tokens that do not belong to the question. The $\beta$ values for the remaining tokens are set to 1.
2https://parl.ai/docs/zoo.html

We train the model with a mixture of likelihood and unlikelihood losses to avoid degradation. The likelihood loss is computed on $D^+$ to push up the probability of tokens in the positive response $y^+$, while the unlikelihood loss is computed on $D^-$ to push down the probability of tokens in the undesirable response $y^-$. It should be noted that samples from $D^+$ and $D^-$ can overlap or differ. In this study, we generate $D^-$ using the same samples from $D^+$.
For each positive sample $(x^+, s^+, y^+)$ in $D^+$, we generate the corresponding negative one $(x^-, s^-, y^-)$ by keeping $x$ and $y$: $x^- = x^+$; $y^- = y^+$. We then append an additional partner persona $s_{neg}$ to the existing personas: $s^- = s^+ + s_{neg}$. The negative persona $s_{neg}$ is chosen so that its presence will turn the positive response $y^+$ into a negative one. For example, if the positive response is *What is your favourite kind of meat?*, then an example of $s_{neg}$ is *I am a vegan*. A simple strategy to generate $s_{neg}$ is to extract the partner persona from the next response in the dialogue. Figure 3 illustrates how a positive and a negative training sample are generated.

As the samples from $D^+$ and $D^-$ overlap, the total loss can now be written as follows:

$$\mathcal{L}=\mathcal{L}_{MLE}(p_{\theta},x,s^{+},y)+\mathcal{L}_{UL}(p_{\theta},x,s^{-},y)$$
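As an illustration, a minimal PyTorch sketch of this mixed objective is given below; the tensor shapes, the toy β mask, and the random logits are simplifying assumptions rather than the actual BB2 training code. In the NRQ setup, the positive and negative passes share the same target tokens and differ only in the appended negative persona.

```python
import torch
import torch.nn.functional as F

def mixed_mle_ul_loss(logits_pos, y_pos, logits_neg, y_neg, beta_mask):
    """logits_*: (T, V) per-step distributions; y_*: (T,) target token ids;
    beta_mask: (T,) with 1 for question tokens to penalize, 0 elsewhere."""
    log_probs_pos = F.log_softmax(logits_pos, dim=-1)
    # Likelihood term: push up p(y_t^+ | x^+, s^+, y_<t^+).
    mle = -log_probs_pos.gather(1, y_pos.unsqueeze(1)).squeeze(1).sum()

    probs_neg = F.softmax(logits_neg, dim=-1)
    p_tok = probs_neg.gather(1, y_neg.unsqueeze(1)).squeeze(1).clamp(max=1 - 1e-6)
    # Unlikelihood term: push down p(y_t^- | x^-, s^-, y_<t^-) only where beta = 1.
    ul = -(beta_mask * torch.log(1.0 - p_tok)).sum()
    return mle + ul

# Toy usage with random logits (T = 5 decoding steps, V = 10 vocabulary entries).
T, V = 5, 10
logits_pos = torch.randn(T, V, requires_grad=True)
logits_neg = torch.randn(T, V, requires_grad=True)
targets = torch.randint(V, (T,))  # y^- = y^+ in this setup
beta = torch.tensor([0., 0., 1., 1., 1.])
loss = mixed_mle_ul_loss(logits_pos, targets, logits_neg, targets, beta)
loss.backward()
```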
## 4.2 **Classification**
As the model can produce multiple responses given the input, we can filter out candidates containing redundant questions. Hence, we can build a binary classification model that can detect whether a generated response contains such questions. The model takes three inputs: the truncated dialogue context, partner speaker persona, and the generated response. Rather than inputting all speaker personas at once for a single prediction, we split them into multiple one-sentence personas and perform multiple predictions. If any of the predictions indicate redundancy in the generated response, we classify it as containing redundant questions.
To generate training data for the classification model, we use the same $D^+$ and $D^-$ sets discussed in Section 4.1. For the redundant class, we pair up the negative partner persona $s_{neg}$ with the target response $y$ and the dialogue context $x$. Meanwhile, we replace $s_{neg}$ with a partner persona present in $s^+$ to form the non-redundant class.
We fine-tune three pre-trained language models, namely XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), and DeBERTa (He et al., 2020), for the classification task. Each training sample is formed by concatenating the dialogue context, the partner speaker persona, and the generated response with a separator token in between.
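A sketch of how such per-persona predictions could be aggregated at inference time is shown below; the `roberta-large` checkpoint name is only a placeholder for a fine-tuned classifier, and the convention that label index 1 means "redundant" is an assumption for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large"  # placeholder; a fine-tuned checkpoint would be used here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def contains_redundant_question(context, personas, response):
    # One prediction per single-sentence partner persona; flag the response
    # if any prediction says "redundant" (assumed to be label index 1).
    for persona in personas:
        text = f"{context} {tokenizer.sep_token} {persona} {tokenizer.sep_token} {response}"
        inputs = tokenizer(text, truncation=True, return_tensors="pt")
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=-1).item()
        if pred == 1:
            return True
    return False
```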
## 4.3 **Contrastive Decoding**
To address the repetition problem in text generation, (Su et al., 2022) has proposed a new approach called contrastive decoding. Since the method was originally designed for decoder-only language models (e.g., GPT2), we made some modifications to adapt it to encoder-decoder models.
Given the context $x$ and the prefix of decoded text $y_{<t}$, the selection of the output token $y_t$ follows:

$$y_{t}=\underset{v\in V^{(k)}}{\arg\max}\Big\{(1-\alpha)\times\underbrace{p_{\theta}(v\mid y_{<t},x)}_{\text{model confidence}}-\alpha\times\underbrace{\max_{n,j}\{\mathrm{sim}(h_{v},h_{x_{j}^{n}})\}}_{\text{degeneration penalty}}\Big\}$$

where $V^{(k)}$ is the set of top-$k$ predictions from the model's probability distribution $p_{\theta}(\cdot\mid y_{<t},x)$. The representation of token $v$, denoted as $h_v$, refers to the decoder output (i.e., the hidden state of the final layer) given the concatenation of the prefix $y_{<t}$ and $v$, as well as the encoder outputs of the dialogue context $x$. Similarly, the representation $h_{x_j^n}$ is the decoder output for the $j$-th token of the $n$-th turn in the dialogue context; it is computed based on the concatenation of the prefix $x^{n}_{<j}$ and $x^{n}_{j}$, as well as the encoder outputs of the dialogue context $x^{<n}$. $\mathrm{sim}(\cdot,\cdot)$ computes the cosine similarity between token representations, while $\alpha\in[0,1]$ controls the relative importance of the model confidence and the degeneration penalty. Model confidence refers to the probability assigned by the model to the candidate $v$, while the degeneration penalty measures the similarity between the candidate $v$ and all tokens present in the dialogue context. We set $\alpha = 0.4$ based on the results presented in (Su et al., 2022).
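The per-step selection can be sketched as follows; `candidate_reps` and `context_reps` stand for the decoder hidden states described above and are assumed to be precomputed, so this is only a sketch of the scoring rule rather than a full decoding loop.

```python
import torch
import torch.nn.functional as F

def contrastive_step(probs, topk_ids, candidate_reps, context_reps, alpha=0.4):
    """probs: (V,) model distribution at step t; topk_ids: (k,) candidate token ids;
    candidate_reps: (k, d) hidden state of each candidate appended to the prefix;
    context_reps: (L, d) hidden states of the dialogue-context tokens."""
    confidence = probs[topk_ids]                                    # (k,)
    sims = F.cosine_similarity(candidate_reps.unsqueeze(1),         # (k, L)
                               context_reps.unsqueeze(0), dim=-1)
    penalty = sims.max(dim=1).values                                # (k,)
    scores = (1 - alpha) * confidence - alpha * penalty
    return topk_ids[scores.argmax()]

# Toy usage: k = 4 candidates, hidden size 8, context length 6.
next_token = contrastive_step(torch.softmax(torch.randn(100), -1),
                              torch.tensor([3, 17, 42, 99]),
                              torch.randn(4, 8), torch.randn(6, 8))
```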
## 4.4 **Contrastive Training**
Contrastive learning can be used to discourage the model from generating undesirable responses (Cao and Wang, 2021). We propose a contrastive training objective that drives the model to favour the generation of non-redundant questions over redundant ones. Given a positive sample $q^+ = (x, s^+, y)$ from $D^+$ and its corresponding negative sample $q^- = (x, s^-, y)$ from $D^-$, the objective is to differentiate the question representations between the two samples. Assume that we have a positive set $P = \{q^+_1 = q^+, q^+_2, \ldots, q^+_m\}$ generated from $q^+$ and a negative set $N = \{q^-_1 = q^-, q^-_2, \ldots, q^-_m\}$ generated from $q^-$; the contrastive loss for $q$ can then be written as follows:

$$l=\frac{-1}{\binom{|P|}{2}}\sum_{\substack{q_{i}^{+},q_{j}^{+}\in P\\ q_{i}^{+}\neq q_{j}^{+}}}\log\frac{\exp(\mathrm{sim}(\mathbf{h}_{i}^{+},\mathbf{h}_{j}^{+}))}{\sum_{\substack{q_{k}\in P\cup N\\ q_{k}\neq q_{i}^{+}}}\exp(\mathrm{sim}(\mathbf{h}_{i}^{+},\mathbf{h}_{k}))}$$
where $\mathbf{h}_i^+$ and $\mathbf{h}_j^+$ are the representations of $q_i^+$ and $q_j^+$, while $\mathbf{h}_k$ is the representation of $q_k$, which can be a sample from either the positive or the negative set.
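A sketch of this loss is given below; the temperature parameter and the averaging over ordered pairs are simplifications for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def question_contrastive_loss(h_pos, h_neg, temperature=1.0):
    """h_pos: (m, d) representations of the positive set P;
    h_neg: (m, d) representations of the negative set N."""
    h_all = torch.cat([h_pos, h_neg], dim=0)                 # P ∪ N
    sim = F.cosine_similarity(h_pos.unsqueeze(1),            # (m, 2m)
                              h_all.unsqueeze(0), dim=-1) / temperature
    m = h_pos.size(0)
    loss, num_pairs = 0.0, 0
    for i in range(m):
        # Denominator: all q_k in P ∪ N except q_i itself.
        denom_mask = torch.ones(sim.size(1), dtype=torch.bool)
        denom_mask[i] = False
        log_denom = torch.logsumexp(sim[i, denom_mask], dim=0)
        for j in range(m):
            if j == i:
                continue
            loss = loss - (sim[i, j] - log_denom)  # -log softmax over P ∪ N \ {q_i}
            num_pairs += 1
    return loss / num_pairs

loss = question_contrastive_loss(torch.randn(3, 16), torch.randn(3, 16))
```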
Sample construction. Given a positive sample $q^+ = (x, s^+, y)$, we generate its sibling positive/negative samples by keeping $x$ and $y$ but appending an additional partner persona $s_{add}$ to the existing personas $s^+$. $s_{add}$ is chosen from a persona pool $S$, which is a collection of all speaker personas extracted from the training set. First, we rank the personas in $S$ based on their similarity scores to the context $x$ and then pick the top-$k$ personas as $s_{add}$. After that, we use the redundant classifier from Section 4.2 to classify each input $(x, s_{add}, y)$. If the prediction is redundant, we use $s_{add}$ to generate a negative sample; otherwise, we use it to construct a positive one.
Sample representation (h∗). We use the outputs of the decoder's last layer to form the representation h for each positive and negative sample. More specifically, we only average over tokens that belong to the question in the target response y.
Training. To avoid model degradation, we combine the contrastive loss with the original MLE loss: $\mathcal{L} = \mathcal{L}_{MLE} + \mathcal{L}_{CL}$.
## 4.5 **Unlikelihood Training With Augmented Loss**
We reuse the sample construction method from Section 4.4 to increase the coverage of the training set and boost the performance of unlikelihood training.
More specifically, we augment the original unlikelihood loss with loss computed from sibling positive and negative samples as follow:
$$\mathcal{L}_{aug}=\frac{1}{|P|}\sum_{i=1}^{|P|}\mathcal{L}_{MLE}(p_{\theta},x,s_{i}^{+},y)+\frac{1}{|N|}\sum_{j=1}^{|N|}\mathcal{L}_{UL}(p_{\theta},x,s_{j}^{-},y)$$
where $P$ and $N$ are the positive and negative sets, $s_i^+$ is the speaker persona of the $i$-th sample from $P$, and $s_j^-$ is the speaker persona of the $j$-th sample from $N$. Samples from $P$ and $N$ are included in the same training batch. Using the augmented loss helps the model better distinguish between negative and positive samples, which reduces the number of redundant questions while maintaining the quality of the original model.
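A minimal sketch of how the augmented loss could be assembled from per-sample loss terms is given below; the loss callables and the dummy usage are stand-ins, not the original implementation.

```python
def augmented_loss(mle_loss_fn, ul_loss_fn, x, y, positive_personas, negative_personas):
    """mle_loss_fn / ul_loss_fn: callables (x, s, y) -> scalar loss, e.g. the MLE and
    unlikelihood terms sketched in Section 4.1; personas are the sibling sets P and N."""
    l_pos = sum(mle_loss_fn(x, s, y) for s in positive_personas) / len(positive_personas)
    l_neg = sum(ul_loss_fn(x, s, y) for s in negative_personas) / len(negative_personas)
    return l_pos + l_neg

# Toy usage with stand-in loss functions.
dummy = lambda x, s, y: float(len(s))
print(augmented_loss(dummy, dummy, "ctx", "resp", ["p1", "p2"], ["n1"]))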
## 5 **Experiments Setup**

## 5.1 **NRQ Dataset**
As there is no available dataset addressing the problem of redundant questions, we create a new nonredundant question set called NRQ, which consists of positive training samples for D+ and negative samples for D−. To form our D+, we gather training samples from Wizard of Wikipedia (WoW)
(Dinan et al., 2018), Empathetic Dialogues (ED) (Rashkin et al., 2018), Blended Skill Talk (BST)
(Smith et al., 2020), Multi-Session Chat (MSC)
(Xu et al., 2021), and Wizard of Internet (WOI)
(Komeili et al., 2021) datasets. Note that we only select samples with questions presented in the target response. To extract speaker personas from conversation history, we use a pre-trained Dialogue Summarization Model from ParlAI.
To create negative samples for the NRQ dataset, we use the approach outlined in Section 4.1, illustrated in Figure 3. Specifically, we convert each positive sample $(x, s^+, y)$ into a negative one by augmenting the speaker personas $s^+$ with a negative partner persona $s_{neg}$ (e.g. *I have two girls*), which we obtain from the partner personas of the next dialogue turn (e.g. *Yes, I have two girls*), denoted as $s_{next}$. However, this procedure poses two challenges: (i) $s_{next}$ may contain multiple personas, some of which may not be relevant to the questions posed in the target response $y$; (ii) $s_{next}$ may be entirely irrelevant, for instance if the next dialogue turn is off-topic or the persona extractor model fails to identify the correct personas. As a result, we rely on human annotators to select only the relevant $s_{neg}$ from $s_{next}$ and discard samples where no relevant $s_{neg}$ can be found. The number of samples in NRQ is 100,181 before filtering, and 50,178 after filtering. We split the final dataset into 46,286 for training, 2,000 for validation, and 1,892 for testing.
Redundant question classification. As described in Section 4.2, we use $D^+$ and $D^-$ to generate training data for our redundant question classifier, resulting in a total of 48,297 and 45,494 samples for the redundant and non-redundant classes, respectively. In addition, we incorporate the human annotation results mentioned above where the negative persona $s_{neg}$ is deemed irrelevant to the question. This provides an additional 39,271 non-redundant samples.
## 5.2 **BB2 Baseline**
As training an end-to-end generation model from scratch is computationally expensive, we choose to use the pre-trained BB2 model (3 billion parameters) as baseline. Our goal is to reduce the number of redundant questions generated by the model.
The BB2 model is fine-tuned from the Blenderbot1 model (Roller et al., 2020b) on BST, MSC, and WOI datasets. For decoding, we use beam search with 4-gram blocking to prevent repetitive questions from generating. The maximum number of tokens in the dialog context is set to 128.
## 5.3 **Evaluation**
Perplexity (PPL) is a metric to measure how well a generation model predicts a response. We want the model to output low perplexity scores for good and coherent responses while producing high perplexity scores for undesirable responses such as redundant questions in our case.
Diversity measures the lexical diversity of the generated texts, computed from corpus-level repetition at different n-gram levels as follows: $\text{diversity}=\prod_{n=2}^{4}\left(1.0-\frac{\text{rep-}n}{100}\right)$, where $\text{rep-}n = 100\times\left(1.0-\frac{|\text{unique }n\text{-grams}(C)|}{|\text{total }n\text{-grams}(C)|}\right)$ and $C$ is the collection of responses generated by the model.
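A direct implementation of these two formulas is sketched below; whitespace tokenization is an assumption for illustration.

```python
from collections import Counter

def rep_n(responses, n):
    """Corpus-level n-gram repetition (in percent) over all generated responses."""
    ngrams = []
    for resp in responses:
        toks = resp.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    if not ngrams:
        return 0.0
    return 100.0 * (1.0 - len(set(ngrams)) / len(ngrams))

def diversity(responses):
    score = 1.0
    for n in (2, 3, 4):
        score *= 1.0 - rep_n(responses, n) / 100.0
    return score

print(diversity(["do you have any pets ?", "do you have any hobbies ?"]))
```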
Coherence measures the semantic similarity between dialogue context and generated response.
We use SimCSE following (Su et al., 2022) to compute the similarity in the embedding space.
Redundant question rate is the percentage of generated questions that are redundant. For automatic evaluation, we use the classifier presented in Section 4.2 to check if a question is redundant.
Automatic evaluation is essential for hyperparameter tuning and model selection. To automatically estimate the quality of generated texts, we first perform self-chat, i.e., two chatbots chatting with each other, to generate 50 bot-bot dialogues using the BB2 Baseline. To make sure each dialogue is different, we seed each one with a human-human conversation (25 turns) from MSC Session 1&2 and then generate 40 more turns. After that, we calculate the diversity, coherence, and redundant rate scores based on the generated questions.

Human evaluation. We recruited human annotators from Amazon Mechanical Turk to conduct 50 human-bot conversations for evaluation. We seed each human-bot conversation with 25 turns from MSC Session 1&2. The human and the bot, i.e., the BB2 Baseline, are asked to continue each seeded conversation for 40 turns. After that, we asked another group of annotators to manually check whether each generated question is redundant based on the entire conversation.
Method comparison. We propose a method for a fair comparison between the BB2 Baseline and the other approaches mentioned in Section 4. Instead of having each model conduct its own conversations, we use responses generated by the BB2 Baseline as a common ground for comparison. For each of the BB2-generated questions, we regenerate it with the compared models and then recompute the evaluation scores. In cases where a model does not generate any question at the end, we replace the end-of-sentence token with the most probable question-word token (e.g. what, how, when, etc.) and continue the decoding process.

| Models | Acc | F1 (Redundant) | F1 (Non-redundant) |
|-----------|-------|----------------|--------------------|
| XLNet | 88.3% | 85.9 | 90.0 |
| RoBERTa | 88.6% | 86.3 | 90.1 |
| DeBERTa | 88.2% | 86.5 | 89.5 |

Table 1: Accuracy and per-class F1 scores of the redundant question classifiers.
## 5.4 **Training Configuration**
We fine-tune the BB2 Baseline using one A100 GPU with an Adam optimizer. The learning rate and batch size are set to 5e-6 and 8, respectively. The model is fine-tuned in a multi-task fashion using samples from the BST, MSC, WOI, and NRQ datasets. We draw samples from each task equally in a round-robin fashion. We use early stopping based on the combined score of test set perplexity and the redundant question rate of bot-bot conversations.
## 6 **Experiment Results**
Redundant question classification. We first report performances of our redundant question classifier in Table 1. As can be seen, all three models perform similarly well, with RoBERTa achieving the highest accuracy of 88.6%. Therefore, we choose RoBERTa to automatically calculate the redundant question rate of the generation models in subsequent analyses.
Conversation length vs redundant rate. As shown in Figure 4, the redundant question rate increases significantly with respect to the length of the conversation. For BB2 Baseline, the rate is 18.4% at turn 30. The number further increases by another 8.1% when the conversation reaches 65 turns. However, this issue is not a concern in previous studies as most evaluate the chatbots on a short conversation setting (less than 10 turns). The increase in redundant rate can be attributed to the limited number of topics the chatbot can initiate.
When the conversation is prolonged, it often revisits topics that have already been discussed.
Truncated context length vs redundant rate. The limitation of 128 tokens for the truncated dialogue in the BB2 Baseline could be one cause of the high redundant question rate. Increasing the truncation length could be considered as a possible solution to address this issue. To investigate this hypothesis, we utilized the MSC model (Xu et al., 2021), which was specifically trained on the MSC dataset to effectively handle long conversations. In Figure 4, the results demonstrate a significant reduction in redundancy rates by extending the truncation length.
For conversations with a length of 30, the redundancy rate decreased from 18.4% (truncated at 128)
to 10.4% (truncated at 1024). However, it is important to note that despite these improvements, they still fall short compared to the BB2 Tuned model using our proposed methods, while also incurring increased training and inference costs.
Bias in training data. Another contributing factor to the redundancy issue is the bias of the BB2 Baseline towards common topics, such as pets, hobbies, and careers, which increases the likelihood of repeating the same topics over again. An explanation can be seen in Table 2, which shows the most frequent redundant questions generated by the BB2 Baseline. These questions strongly overlap with the most frequent questions in the training data of the BB2 Baseline, demonstrating the model's tendency to generate the most probable questions as a downside of maximum likelihood estimation.
Mitigation methods comparison. We apply the mitigation methods to improve the performance of the BB2 Baseline. As can be seen in Table 3, our proposed methods not only reduce the redundancy rate but also increase the diversity score. A discussion of each method is provided below:
BB2 Baseline does not perform well in most metrics. The negative perplexity is significantly lower than the positive one, indicating that the model is more likely to generate redundant questions instead of target questions. Additionally, the low measure of lexical diversity suggests that the model tends to produce common but repetitive questions, resulting in a high redundant rate of 26.5%.
Contrastive decoding can significantly reduce the redundant question rate to 17% without the need to retrain the model. This improvement can be explained by the significant increase in diversity score, indicating that the model favors less repetitive questions. We also observe an improvement in coherence score, which is consistent with prior studies (Su et al., 2022).
Unlikelihood training obtains the best redundant rate at 7.5%, thanks to significant increases in negative PPL and diversity score. The slight increase in positive PPL suggests a small degradation in the quality of the generated questions, which is also reflected in a lower coherence score. However, using the augmented loss and further combining it with contrastive decoding bring considerable improvements across all metrics, especially in the diversity score.
Contrastive training reduces the redundant rate to 11.4%, but it still pales in comparison to unlikelihood training. Also, using contrastive training comes at the cost of question degeneration, as demonstrated by the increase in both negative and positive PPL. It can be seen that the model is confused between the task of degenerating redundant questions and that of degenerating all questions.
| Methods | Positive PPL | Negative PPL | Coherence | Diversity | Redundant rate |
|------------------------|--------------|--------------|-----------|-----------|----------------|
| BB2 Baseline | 12.2 | 7.9 | 0.34 | 0.02 | 26.5% |
| Contrastive decoding | - | - | 0.36 | 0.07 | 17.0% |
| Contrastive training | 14.4 | 69.6 | 0.34 | 0.11 | 11.4% |
| Unlikelihood training | 12.5 | 37.5 | 0.32 | 0.09 | 7.50% |
| + Augmented loss | 12.7 | 38.0 | 0.33 | 0.12 | 6.44% |
| + Contrastive decoding | - | - | 0.33 | 0.15 | 6.66% |

Table 3: Automatic evaluation results of the redundancy mitigation methods on bot-bot conversations.
| Methods | Redundant |
|-------------------------------|-------------|
| BB2 Baseline | 27.2% |
| Classification | 15.4% |
| Unlikelihood | 11.4% |
| Unlikelihood + Classification | 8.7% |
Table 4: Evaluation results on 50 human-bot dialogues
| BB2 Baseline | BB2 Tuned |
|----------------|-------------|
| 37.8% | 62.1% |
Table 5: Win rate of the BB2 Baseline and our proposed approach.
Human evaluation. Table 4 reports human evaluation results on 50 human-bot dialogues. The results indicate that the BB2 Baseline still has a high redundant question rate of 27.2%, highlighting the need for effective solutions. While using a redundant classifier alone can reduce the rate significantly to 15.4%, this is still much higher than the 11.4% rate achieved with unlikelihood training. The failure of the redundant classifier can be attributed to two reasons: (1) since the problem of assigning high probabilities to redundant questions remains unaddressed, it is not uncommon that the model generates all candidate responses with redundant questions; (2) with an accuracy of 88.6%, the redundant classifier can misclassify some redundant questions as non-redundant. Nevertheless, using classification on top of unlikelihood training can reduce the redundant rate further to 8.7%.
We can see that the improvements in human-bot conversations are considerably lower compared to bot-bot conversations. This is due to the fact that human-bot conversations are typically more varied and less predictable than bot-bot conversations.
In contrast, bot-bot conversations tend to revolve around common topics and employ a shared vocabulary that is well-represented in the training data of the NRQ dataset.
Finally, we asked human annotators to compare the overall question-asking ability of the original BB2 Baseline with our proposed method combining unlikelihood training with the redundant classifier.
For each pair of comparisons, two annotators were asked to choose which of the two generated responses was better, or if they were both equally good or bad. In cases where the annotators disagreed, we manually reviewed the case and determined the correct annotation. When calculating the win rate, we excluded comparison cases where both responses were equal in quality. According to the results presented in Table 5, our approach significantly outperforms the original model.
## 7 **Predictions Analysis**
We present several successful and failed cases of the proposed approach. Table 6 compares the perplexities of the BB2 Baseline and the BB2 model tuned with unlikelihood training when generating the target questions given different partner personas. On the one hand, if the partner's persona, i.e., *I have a dog*, has nothing to do with the target question, i.e., *What do you do for a living?*, then there is not much difference in perplexity between BB2 Baseline and BB2 Tuned. This suggests that the proposed negative training method does not badly affect the question-asking ability of the original BB2 Baseline. On the other hand, if the presence of the partner's persona, i.e., *I'm a software engineer*, turns the target question into a redundant question, then the perplexity of the BB2 Tuned model increases significantly to 68.5 while the number for BB2 Baseline remains
| Questions | Partner's persona | Baseline PPL | Tuned PPL |
|------------------------------|----------------------------------|--------------|-----------|
| What do you do for a living? | I have a dog. | 2.04 | 2.42 |
| | I'm a software engineer. | 2.06 | 68.5 |
| | I'm still in high school. | 2.07 | 3.41 |
| Do you have any pets? | I like to read books. | 2.56 | 2.49 |
| | I have a cat and a dog. | 2.52 | 50.0 |
| | My apartment doesn't allow pets. | 2.48 | 2.93 |
Table 6: Example perplexities of the BB2 Baseline and BB2 Tuned with NRQ when predicting the target questions.
very low, at 2.06. We also note that one of the weaknesses of the BB2 Tuned model is that it is still unable to spot redundant questions if they are not clearly related to the partner's persona. For instance, the partner's persona *I'm still in high school* can be interpreted as *I don't have a job* but the BB2 Tuned model still assigns a very low perplexity for the redundant question *What do you do for a living*.
## 8 **Conclusion**
Asking good questions is an important skill for a chatbot to engage in a long-term conversation.
This study first introduces the problem of redundant questions in neural text generation models.
Several methods, including negative training, decoding, and classification, have been proposed to lower the probabilities of these undesirable questions. We also create a first-of-its-kind dataset, named NRQ, containing training samples with a redundant question assigned to each dialogue context and set of speaker personas. We validate our methods with the BB2 model and observe a significant reduction of the redundant rate, which results in a higher rating for the questioning skills of the chatbot. We believe the proposed approaches and datasets will be beneficial for building future dialogue systems.
## 9 **Acknowledgement**
This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18/CRT/6183).
## Limitations
Resource hungry. One of the difficulties in deploying large-scale neural text generation models is resource allocation and latency problems. For example, the BB2 Baseline 3B requires at least a 16GB GPU and a couple of seconds to generate a response using one Tesla V100. As our approach requires inputting all of the partner's personas alongside the dialog context, it almost doubles the inference time and increases the use of GPU memory significantly. As a result, it is not resource-friendly when the conversation is prolonged. A possible solution to this is to use the RAG retriever model to select a few relevant partner personas and incorporate only these into the input. However, this may be difficult, as we might not know what questions are going to be generated during decoding. A
redundant question might be generated because a partner's persona is missing.
The redundant rate is still high. Although the proposed approach significantly reduces the redundant question rate, the number remains relatively high, at 8.7%. We believe this is a much more serious issue compared to other challenges, such as contradiction or "hallucinations", as it is very uncomfortable for the user to repeat the same information or discuss a topic multiple times during the conversation. As mentioned in the previous sections, one of the main weaknesses of the fine-tuned model is its failure to recognize indirect relations between a speaker persona and a redundant question. We believe the problem can be addressed by scaling up the size of the NRQ dataset to cover more of these difficult cases. Better data augmentation techniques can also be used to diversify redundant questions and negative partner personas.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Shuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in
abstractive summarization. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633–6649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Moya Chen, Douwe Kiela, Mojtaba Komeili, Spencer Poff, Stephen Roller, Kurt Shuster, Arthur Szlam, Jason Weston, and Jing Xu. 2021. Blender bot 2.0: An open source chatbot that builds long-term memory and searches the internet. https://parl.ai/
projects/blenderbot2/.
Emily Dinan, Gavin Abercrombie, Stevie A Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser, et al. 2022. Safetykit: First aid for measuring safety in open-domain conversational systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers). Association for Computational Linguistics.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. *arXiv preprint arXiv:1811.01241*.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018.
Hierarchical neural story generation. arXiv preprint arXiv:1805.04833.
Dilek Hakkani-Tur. 2021. Alexa prize socialbot grand challenge year iv. In Alexa Prize SocialBot Grand Challenge 4 Proceedings.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint* arXiv:2006.03654.
Tianxing He and James Glass. 2019. Negative training for neural dialogue response generation. *arXiv* preprint arXiv:1903.02134.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *arXiv preprint arXiv:1904.09751*.
Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021.
Internet-augmented dialogue generation. *arXiv* preprint arXiv:2107.07566.
Jakub Konrád, Jan Pichl, Petr Marek, Petr Lorenc, Van Duy Ta, Ondřej Kobza, Lenka Hylová, and Jan Šedivý. 2021. Alquist 4.0: Towards social intelligence using generative models and dialogue personalization. *arXiv preprint arXiv:2109.07968*.
Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907.
Weizhao Li, Junsheng Kong, Ben Liao, and Yi Cai.
2022. Mitigating contradictions in dialogue based on contrastive learning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2781–2788.
Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021a. Addressing inquiries about history: An efficient and practical framework for evaluating open-domain chatbot consistency. arXiv preprint arXiv:2106.02228.
Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021b. Addressing inquiries about history: An efficient and practical framework for evaluating open-domain chatbot consistency. arXiv preprint arXiv:2106.02228.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, and Christopher D Manning. 2020. Neural generation meets real people: Towards emotionally engaging mixed-initiative conversations. *arXiv preprint* arXiv:2008.12348.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic opendomain conversation models: A new benchmark and dataset. *arXiv preprint arXiv:1811.00207*.
Stephen Roller, Y-Lan Boureau, Jason Weston, Antoine Bordes, Emily Dinan, Angela Fan, David Gunning, Da Ju, Margaret Li, Spencer Poff, et al. 2020a. Opendomain conversational agents: Current progress, open problems, and future directions. arXiv preprint arXiv:2006.12442.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020b. Recipes for building an open-domain chatbot. *arXiv preprint* arXiv:2004.13637.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022.
Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188.
Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. *arXiv preprint arXiv:2004.08449*.
Yixuan Su and Nigel Collier. 2022. Contrastive search is what you need for neural text generation. arXiv preprint arXiv:2210.14140.
Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022. A contrastive framework for neural text generation. arXiv preprint arXiv:2202.06417.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. *arXiv* preprint arXiv:1908.04319.
Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. 2022. Learning to break the loop:
Analyzing and mitigating repetitions for neural text generation. *arXiv preprint arXiv:2206.02369*.
Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. *arXiv preprint arXiv:2107.07567*.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32. |
kodama-etal-2023-knowledge | Is a Knowledge-based Response Engaging?: An Analysis on Knowledge-Grounded Dialogue with Information Source Annotation | https://aclanthology.org/2023.acl-srw.34 | Currently, most knowledge-grounded dialogue response generation models focus on reflecting given external knowledge. However, even when conveying external knowledge, humans integrate their own knowledge, experiences, and opinions with external knowledge to make their utterances engaging. In this study, we analyze such human behavior by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the corpus is annotated with its information source, either derived from external knowledge (database-derived) or the speaker{'}s own knowledge, experiences, and opinions (speaker-derived). Our analysis shows that the presence of speaker-derived information in the utterance improves dialogue engagingness. We also confirm that responses generated by an existing model, which is trained to reflect the given knowledge, cannot include speaker-derived information in responses as often as humans do. | # Is A Knowledge-Based Response Engaging?: An Analysis On Knowledge-Grounded Dialogue With Information Source Annotation
Takashi Kodama1**, Hirokazu Kiyomaru**1 Yin Jou Huang1, Taro Okahisa2, **Sadao Kurohashi**1,3 1Kyoto University, 2Shizuoka University, 3National Institute of Informatics
{kodama, kiyomaru, huang, kuro}@nlp.ist.i.kyoto-u.ac.jp [email protected]
## Abstract
Currently, most knowledge-grounded dialogue response generation models focus on reflecting given external knowledge. However, even when conveying external knowledge, humans integrate their own knowledge, experiences, and opinions with external knowledge to make their utterances engaging. In this study, we analyze such human behavior by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the corpus is annotated with its information source, either derived from external knowledge (database-derived) or the speaker's own knowledge, experiences, and opinions
(speaker-derived). Our analysis shows that the presence of speaker-derived information in the utterance improves dialogue engagingness.
We also confirm that responses generated by an existing model, which is trained to reflect the given knowledge, cannot include speaker-derived information in responses as often as humans do.
## 1 Introduction
More and more dialogue research has utilized external knowledge to enable dialogue systems to generate rich and informative responses (Ghazvininejad et al., 2018; Zhou et al., 2018; Moghe et al., 2018; Dinan et al., 2019; Zhao et al., 2020). The major focus of such research is in how to select appropriate external knowledge and reflect it accurately in the response (Kim et al., 2020; Zhan et al., 2021; Rashkin et al., 2021; Li et al., 2022).
However, as shown in Figure 1,1 a good speaker not only informs the dialogue partner of external knowledge but also incorporates his or her own knowledge, experiences, and opinions effectively, which makes the dialogue more engaging. The extent to which models specializing in reflecting given external knowledge can achieve such an engaging behavior has not yet been explored quantitatively.

1Examples of dialogues presented in this paper are originally in Japanese and were translated by the authors.
In this study, we first analyze how humans incorporate speaker-derived information by annotating the utterances in an existing knowledge-grounded dialogue corpus. Each entity in the utterances is annotated with its information source, either derived from external knowledge (database-derived)
or the speaker's own knowledge, experiences, and opinions (speaker-derived). The analysis of the annotated dataset showed that engaging utterances contained more speaker-derived information.
In addition, we train a BART-based response generation model in a standard way, i.e., by minimizing perplexity, and investigate the extent to which it incorporates speaker-derived information.
The result showed that the response generation model did not incorporate speaker-derived information into its utterances as often as humans do.
This result implies that minimizing perplexity is insufficient to increase engagingness in knowledgegrounded response generation and suggests room for improvement in the training framework.
## 2 Information Source Annotation
This section describes the annotation scheme for information sources and the annotation results.
## 2.1 Scheme
We annotate Japanese Movie Recommendation Dialogue (JMRD) (Kodama et al., 2022) with information sources2. JMRD is a human-to-human knowledge-grounded dialogue corpus in Japanese.
A recommender recommends a movie to a seeker.
Each utterance of the recommender is associated with movie information as external knowledge.
Each piece of knowledge consists of a knowledge type (e.g., title) and the corresponding knowledge contents (e.g., "Marvel's The Avengers").
In this study, we extract entities from the recommender's utterances and annotate them with their information source. Entities are nouns, verbs, and adjectives and are extracted together with their modifiers to make it easier to grasp their meanings.
Entities are extracted using Juman++ (Tolmachev et al., 2020), a widely-used Japanese morphological analyzer. Annotators classify the extracted entities into the following information source types:
Database-derived: The entity is based on the external knowledge used in that utterance.
Speaker-derived: The entity is based on the knowledge, experiences, and opinions that the recommender originally has about the recommended movie.
Other: The entity does not fall under the above two types (e.g., greetings).
An annotation example is shown below.
Utterance: The action scenes (database-derived) are spectacular (speaker-derived)!
Used knowledge: Genre, Action
We recruited professional annotators, who are native Japanese speakers, to annotate these information source types. One annotator was assigned to each dialogue. After the annotation, another annotator double-checked the contents.
## 2.2 Result
Table 1 shows the annotation statistics. While JMRD is a knowledge-grounded dialogue corpus and thus inherently contains many database-derived entities, it also contains about 60,000 speaker-derived entities.

2Examples of dialogue and knowledge in JMRD can be found in Appendix A.1.
| | Train | Dev | Test | Total |
|--------------------|---------|--------|---------|---------|
| # dialogues | 4,575 | 200 | 300 | 5,075 |
| # utterances (R) | 51,080 | 2,244 | 3,347 | 56,671 |
| # entities | 235,771 | 10,320 | 15,734 | 261,825 |
| # database-derived | 166,958 | 7,223 | 10,476 | 184,657 |
| # speaker-derived | 51,170 | 2,303 | 4,095 | 57,568 |
| # other | 17,643 | 794 | 1,163 | 19,600 |
Table 1: Statistics of the information source annotation.
R indicates recommender.
This result verifies that humans incorporate their own knowledge, experiences, and opinions into their utterances, even in dialogues to convey external knowledge.
## 3 Analysis Of Human Utterances
We analyze human utterances at the dialogue level and utterance level.
## 3.1 Dialogue-Level Analysis
4,328 dialogues in JMRD have post-task questionnaires on a 5-point Likert scale (5 is the best). We regard the rating of the question posed to the seekers (i.e.,
Did you enjoy the dialogue?) as dialogue engagingness and analyze the relationship between this and the ratio of each information source label.
Figure 2 shows that dialogues with high engagingness scores tend to have more speaker-derived entities (or less database-derived) than those with low engagingness scores. When constructing JMRD, recommenders were given a certain amount of external knowledge and asked to use that knowledge to respond. However, recommenders highly rated by their dialogue partners incorporated not only the given external knowledge but also speakerderived information to some extent in their dialogues.
## 3.2 Utterance-Level Analysis
We conduct the utterance-level evaluation via crowdsourcing. We randomly extract 500 responses along with their contexts (= 4 previous utterances) from the test set. For each utterance, workers rate utterance engagingness (i.e., Would you like to talk to the person who made this response?) on a 5-point Likert scale, with 5 being the best. Three workers evaluate each utterance, and the scores are averaged.
The average score for utterances with speaker-derived entities was 3.31, while that for utterances without speaker-derived entities was 3.07. Student's t-test with p = 0.05 revealed a statistically significant difference between these scores.
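Such a comparison can be sketched as follows; the score lists here are hypothetical placeholders standing in for the per-utterance averaged ratings, not the actual annotation data.

```python
from scipy import stats

# Hypothetical averaged engagingness scores for the two groups of utterances.
with_speaker = [3.7, 3.3, 2.7, 4.0, 3.3]      # utterances containing speaker-derived entities
without_speaker = [3.0, 2.7, 3.3, 3.0, 3.3]   # utterances without them

t_stat, p_value = stats.ttest_ind(with_speaker, without_speaker)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # reject the null hypothesis if p < 0.05
```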
Furthermore, Figure 3 shows the relationship between utterance engagingness and the ratio of each information source label. This figure shows that utterances with high scores tend to have more speaker-derived entities. This trend is consistent with that of the dialogue engagingness.
Does subjective knowledge contribute to engagingness? The knowledge type used in JMRD can be divided into subjective knowledge (review) and objective knowledge (title, etc.). Reviews are the opinions of individuals who have watched the movies and have similar characteristics to speaker-derived information. We then examine whether there is a difference in engagingness between utterances using subjective and objective knowledge. The average engagingness scores were 3.32 and 3.16,3 respectively, and Student's t-test with p = 0.05 revealed no statistically significant difference. The above analysis demonstrates that information obtained from the speaker's own experience is an important factor in utterance engagingness.

3We exclude utterances referring to both subjective and objective knowledge from this result.
## 4 Analysis Of System Utterances
We investigate the distribution of information source labels in the responses of the model trained on the knowledge-grounded dialogue dataset. First, we train a Response Generator (§4.1) with the dialogue contexts and external knowledge as input and responses as output. Next, an Information Source Classifier (§4.2) is trained with responses and external knowledge as input and information source labels as output. Then, the Information Source Classifier infers the information source labels for the system responses generated by the Response Generator. Finally, we analyze the distribution of inferred information source labels.
## 4.1 Response Generator
We use a BART*large* (Lewis et al., 2020) model as a backbone.4 The input to the model is formed as follows:
$$[\text{CLS}]\,u_{t-4}\,[\text{SEP}]\,u_{t-3}\,[\text{SEP}]\,u_{t-2}\,[\text{SEP}]\,u_{t-1}\,[\text{SEP}]\,[\text{CLS}_{K}]\,kt^{1}\,[\text{SEP}]\,kc^{1}\,[\text{SEP}]\ldots[\text{CLS}_{K}]\,kt^{M}\,[\text{SEP}]\,kc^{M}\,[\text{SEP}],\tag{1}$$
where $t$ is the dialogue turn, $u_t$ is the $t$-th response, and $kt^{i}$ and $kc^{i}$ ($1 \leq i \leq M$) are the knowledge type and knowledge content associated with the target response, respectively ($M$ is the maximum number of knowledge pieces associated with $u_t$). $[\text{CLS}_K]$ is a special token. We feed the gold knowledge into the model to focus on how knowledge is reflected in the responses. The model learns to minimize perplexity in generating $u_t$.
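A sketch of the string layout in Eq. (1) is given below; the literal special-token strings, spacing, and the English example are schematic assumptions, since the actual model relies on the BART tokenizer and Japanese text.

```python
def build_generator_input(context_utts, knowledge, cls="[CLS]", cls_k="[CLS_K]", sep="[SEP]"):
    """context_utts: the last four utterances u_{t-4}..u_{t-1};
    knowledge: list of (knowledge_type, knowledge_content) pairs for the target turn."""
    parts = [cls] + [f"{u} {sep}" for u in context_utts]
    for k_type, k_content in knowledge:
        parts.append(f"{cls_k} {k_type} {sep} {k_content} {sep}")
    return " ".join(parts)

example = build_generator_input(
    ["Hello.", "Hello. Nice to meet you!", 'Do you know "Avengers: Endgame"?',
     "I have only heard of the title..."],
    [("Released Year", "2019")],
)
print(example)
```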
We evaluated the quality of response generation with the SacreBLEU (Post, 2018). BLEU-1/2/3/4 scored high, 81.1/73.5/71.0/69.9. This result is reasonable because the gold knowledge was given.
## 4.2 Information Source Classifier
We fine-tune a RoBERTa*large* (Liu et al., 2019)
model.5 The Information Source Classifier performs a sequence labeling task to estimate BIO6 labels of the information source.
| | | Engagingness |
|-----------|----------------------------------------------------------------------------------------------------------------------------|--------------|
| Context | ... Recommender: This movie is an animation movie released in 2015. Seeker: I see. | |
| Knowledge | {director, Takahiko Kyogoku}, {cast, Emi Nitta}, {cast, Yoshino Nanjo} | |
| Response | Human: The director is Takahiko Kyogoku, and the voice actors are Emi Nitta and Yoshino Nanjo. These two are also singers. | 4.00 |
| | System: The director is Takahiko Kyogoku. The voice actors are Emi Nitta and Yoshino Nanjo. | 2.33 |

Table 2: An example of the human and system response. The blue and red parts refer to database-derived and speaker-derived information, respectively.

| | Prec. | Rec. | F1 |
|------------------|-------|-------|-------|
| database-derived | 94.92 | 95.61 | 95.27 |
| speaker-derived | 80.88 | 84.39 | 82.60 |
| other | 82.93 | 64.15 | 72.34 |
| micro avg. | 90.52 | 90.48 | 90.50 |

Table 3: Results of the sequence labeling by the Information Source Classifier.

| Dist. (%) | Human (gold) | Human (pred) | System (pred) |
|------------------|--------------|--------------|---------------|
| database-derived | 66.22 | 66.75 | 85.48 |
| speaker-derived | 26.33 | 27.49 | 10.66 |
| other | 7.45 | 5.77 | 3.86 |

Table 4: Distributions of information source labels for human and system responses.
The input to the model is formed as follows:
$$[\text{CLS}]\,u_{t}\,[\text{SEP}]\,[\text{CLS}_{K}]\,kt^{1}\,[\text{SEP}]\,kc^{1}\,[\text{SEP}]\ldots[\text{CLS}_{K}]\,kt^{M}\,[\text{SEP}]\,kc^{M}\,[\text{SEP}]\tag{2}$$
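A sketch of how annotated entity spans could be turned into the BIO tags used for this sequence labeling task is shown below; the span indices, the whitespace tokens, and the label names are illustrative rather than the corpus' exact tag set.

```python
def bio_tags(tokens, entity_spans):
    """tokens: tokenized response; entity_spans: list of (start, end, source) with
    source in {"database", "speaker", "other"} taken from the annotation."""
    tags = ["O"] * len(tokens)
    for start, end, source in entity_spans:
        tags[start] = f"B-{source}"
        for i in range(start + 1, end):
            tags[i] = f"I-{source}"
    return tags

tokens = ["The", "action", "scenes", "are", "spectacular", "!"]
print(bio_tags(tokens, [(1, 3, "database"), (4, 5, "speaker")]))
# ['O', 'B-database', 'I-database', 'O', 'B-speaker', 'O']
```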
Table 3 shows precision, recall, and F1 scores for each label and micro average scores across all labels. The micro average F1 score was 90.50, which is accurate enough for the further analysis.
## 4.3 Analysis For Inferred Labels
The information source labels for system responses are inferred using the classifier trained in Section 4.2. Table 4 shows distributions of information source labels for human and system responses. For a fair comparison, the human responses are also given labels inferred by the classifier (denoted as **Human (pred)**), although they have gold labels (denoted as **Human (gold)**).
Human (gold) and **Human (pred)** have similar distributions, indicating that the accuracy of the classifier is sufficiently high. For **System**
(pred), the percentage of database-derived labels increased significantly (66.75%→85.48%) and that of speaker-derived information decreased significantly (27.49%→10.66%). This result shows that the response generation model, trained in a standard way, was not able to use speaker-derived information as often as humans do.
Table 2 shows an example of the human and system responses along with the engagingness scores. The system was able to reflect the given knowledge in the response appropriately but did not incorporate additional speaker-derived information, such as the fact that the two voice actors also work as singers.
For further analysis, we investigated the average ratios of speaker-derived information by knowledge type used. Table 5 shows the result. Significant drops were observed for reviews
(31.42%→6.32%) and plots (13.68%→2.32%).
This is probably because reviews and plots are relatively long and informative external knowledge, so the system judged there was no need to incorporate additional speaker-derived information.
Combined with our observation that speaker-derived information improves engagingness, the current model is likely to have lower engagingness due to its inability to effectively incorporate speaker-derived information. Such an ability is hardly learned by simply optimizing a model to reduce the perplexity of response generation, suggesting the need for a novel learning framework.
| Ratio (%) | Human (gold) | Human (pred) | System (pred) |
|---------------|--------------|--------------|---------------|
| Title | 30.21 | 34.12 | 27.09 |
| Released Year | 16.41 | 22.31 | 6.56 |
| Director | 13.94 | 11.96 | 4.50 |
| Cast | 36.11 | 45.34 | 23.45 |
| Genre | 10.47 | 15.14 | 5.49 |
| Review | 27.72 | 31.42 | 6.32 |
| Plot | 13.98 | 13.68 | 2.32 |
| No knowledge | 57.49 | 63.08 | 55.99 |

Table 5: Average ratios of speaker-derived labels per knowledge type used.
## 5 Conclusion
We analyzed the distribution of speaker-derived information in human and system responses in the knowledge-grounded dialogue. The analysis showed that the use of speaker-derived information, as well as external knowledge, made responses more engaging. We also confirmed that the response generation model trained in a standard way generated less speaker-derived information than humans.
It is difficult to make good use of speaker-derived information by simply minimizing the perplexity of the model because a wide variety of speaker-derived information appears in each dialogue. We hope our published annotated corpus becomes a good launch pad for tackling this issue.
## Acknowledgements
We would like to thank anonymous reviewers for their insightful comments. This work was supported by NII CRIS collaborative research program operated by NII CRIS and LINE Corporation. This work was also supported by JST, CREST Grant Number JPMJCR20D2, Japan and JSPS KAKENHI Grant Number JP22J15317.
## References
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, William B. Dolan, Jianfeng Gao, Wen tau Yih, and Michel Galley. 2018. A knowledgegrounded neural conversation model.
Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim.
2020. Sequential latent knowledge selection for knowledge-grounded dialogue. In International Conference on Learning Representations.
Takashi Kodama, Ribeka Tanaka, and Sadao Kurohashi. 2022. Construction of hierarchical structured knowledge-based recommendation dialogue dataset and dialogue system. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 83–92, Dublin, Ireland. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence
pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.
Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, and Dilek Hakkani-Tur. 2022. Enhancing knowledge selection for grounded dialogues via document semantic graphs. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2810–2823, Seattle, United States. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322–2332, Brussels, Belgium. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, Brussels, Belgium. Association for Computational Linguistics.
Hannah Rashkin, David Reitter, Gaurav Singh Tomar, and Dipanjan Das. 2021. Increasing faithfulness in knowledge-grounded dialogue with controllable features. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 704–718, Online. Association for Computational Linguistics.
Arseny Tolmachev, Daisuke Kawahara, and Sadao Kurohashi. 2020. Design and structure of the Juman++ morphological analyzer toolkit. Journal of Natural Language Processing, 27(1):89–132.
Haolan Zhan, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Yongjun Bao, and Yanyan Lan. 2021. Augmenting knowledge-grounded conversations with sequential knowledge transition. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5621–5630, Online. Association for Computational Linguistics.
Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020. Knowledgegrounded dialogue generation with pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3377–3390, Online. Association for Computational Linguistics.
Kangyan Zhou, Shrimai Prabhumoye, and Alan W
Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708–713, Brussels, Belgium. Association for Computational Linguistics.
## A Appendices

## A.1 Example of JMRD
Table 6 and 7 show examples of the dialogue and knowledge in JMRD.
## A.2 Implementation Details

## A.2.1 Response Generator
Dialogue contexts, knowledge (knowledge types and contents), and target responses are truncated to the maximum input length of 256, 256, and 128, respectively. The model is trained for up to 50 epochs with a batch size of 512 and 0.5 gradient clipping. We apply early stopping if no improvement of the loss for the development set is observed for three consecutive epochs. We use AdamW optimizer (Loshchilov and Hutter, 2019)
with β1 = 0.9, β2 = 0.999, ϵ = 1e−8 and an initial learning rate of 1e−5. We use an inverse square root learning rate scheduler with the first 1,000 steps allocated for warmup. During decoding, we use beam search with a beam size of 3.
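For concreteness, the optimizer and learning-rate schedule described above could be set up as in the following sketch; the function and variable names are our own illustration of the reported hyperparameters, not the authors' code.

```python
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_optimizer_and_scheduler(model, lr=1e-5, warmup_steps=1000):
    # AdamW with the hyperparameters reported above.
    optimizer = AdamW(model.parameters(), lr=lr, betas=(0.9, 0.999), eps=1e-8)

    # Inverse square root schedule: linear warmup for the first
    # `warmup_steps` updates, then decay proportional to 1/sqrt(step).
    def inv_sqrt(step):
        step = max(step, 1)
        if step < warmup_steps:
            return step / warmup_steps
        return (warmup_steps / step) ** 0.5

    return optimizer, LambdaLR(optimizer, lr_lambda=inv_sqrt)
```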
## A.2.2 Information Source Classifier
Target responses and knowledge (knowledge types and contents) are truncated to the maximum input length of 128 and 384, respectively. The model is trained for up to 20 epochs with a batch size of 64 and 0.5 gradient clipping. We apply early stopping if no improvement of the f1 score for the development set is observed for three consecutive epochs. We use AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9, β2 = 0.999,
ϵ = 1e−8 and an initial learning rate of 1e−5. We use an inverse square root learning rate scheduler with the first 1,000 steps allocated for warmup.
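The early-stopping criterion used for both models (stop after three consecutive epochs without improvement on the development set) can be sketched as follows; `train_one_epoch` and `evaluate_dev` are placeholders for the actual training and evaluation routines.

```python
def train_with_early_stopping(train_one_epoch, evaluate_dev, max_epochs=20, patience=3):
    """Stop when the development metric (F1 here, loss for the response
    generator) has not improved for `patience` consecutive epochs."""
    best_score, bad_epochs, best_epoch = float("-inf"), 0, None
    for epoch in range(max_epochs):
        train_one_epoch()
        score = evaluate_dev()  # higher is better (negate the loss if needed)
        if score > best_score:
            best_score, bad_epochs, best_epoch = score, 0, epoch
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_epoch, best_score
```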
| Turn | Dialogue | Knowledge type | Knowledge content |
|------|----------|----------------|-------------------|
| R1 | Hello. | No knowledge | - |
| S1 | Hello. Nice to meet you! | | |
| R2 | Do you know "Avengers: Endgame"? | Title | Avengers: Endgame |
| S2 | I have only heard of the title... | | |
| R3 | This movie was released in 2019. | Released Year | 2019 |
| S3 | Got it. Is it an American movie? | | |
| R4 | Yes, It's an American action movie. | Genre | Action |
| S4 | What are some of the highlights? | | |
| R5 | The highlight is when the heroes gather to confront Thanos, who is an alien villain! | Review | Heroes gather to confront Thanos |
| S5 | I see! Is this a story of battles in space? | | |
| R6 | No, it takes place on Earth. | No knowledge | - |
| S6 | Then, the villain will attack the earth... | | |
| R7 | Yes, there are some scary moments. | No knowledge | - |
| S7 | Is it scary...? I don't really like horror movies, but I like action ones. Would I be able to enjoy watching it? | | |
| R8 | It is not scary like horror movies, so I think you will enjoy watching it! | No knowledge | - |
| S8 | Good! The fight between Thanos and the heroes sounds exciting! | | |
| R9 | Please watch it! | No knowledge | - |
| S9 | Yes! I'll have a chance to go to the video store soon and rent "Avengers: Endgame"! | | |
| R10 | Thank you! | No knowledge | - |
| S10 | Thank you, too, for this valuable information! | | |
Table 6: A full dialogue example in JMRD. R and S in Turn column denote recommender and seeker, respectively.
Subscript numbers indicate the number of turns in the dialogue. "No knowledge" means that the recommender did not use the given knowledge information.
| Knowledge type | Attribute | Knowledge content |
|----------------|-----------|-------------------|
| Title | | Avengers: Endgame |
| Released Year | | 2019 |
| Director | name | Anthony Russo, Joe Russo |
| | description | Director, producer, screenwriter, actor, and editor for television and film in the United States. |
| Cast | cast1 name | Robert Downey Jr. |
| | cast1 description | an American actor, voice actor, musician, and producer. |
| | cast2 name | Chris Evans |
| | cast2 description | an American actor. He was born in Sudbury, Massachusetts. |
| Genre | | Action, Adventure |
| Review | | 5 sentences, such as "Heroes gather to confront Samus." |
| Plot | | 10 sentences, such as "In 2018, three weeks after half of all life in the entire universe was erased by decimation (genocide using the power of the Infinity Stone) by Thanos the Titan." |
Table 7: An example of knowledge used in JMRD. The director and the casts have two attributes: name and description, respectively. |
sato-etal-2023-choosing | Choosing What to Mask: More Informed Masking for Multimodal Machine Translation | https://aclanthology.org/2023.acl-srw.35 | Pre-trained language models have achieved remarkable results on several NLP tasks. Most of them adopt masked language modeling to learn representations by randomly masking tokens and predicting them based on their context. However, this random selection of tokens to be masked is inefficient to learn some language patterns as it may not consider linguistic information that can be helpful for many NLP tasks, such as multimodal machine translation (MMT). Hence, we propose three novel masking strategies for cross-lingual visual pre-training - more informed visual masking, more informed textual masking, and more informed visual and textual masking - each one focusing on learning different linguistic patterns. We apply them to Vision Translation Language Modelling for video subtitles (Sato et al., 2022) and conduct extensive experiments on the Portuguese-English MMT task. The results show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Our models outperform the MMT baseline and we achieve state-of-the-art accuracy (52.70 in terms of BLEU score) on the How2 dataset, indicating that more informed masking helps in acquiring an understanding of specific language structures and has great potential for language understanding. | # Choosing What To Mask: More Informed Masking For Multimodal Machine Translation
Júlia Sato∗, Helena Caseli∗, Lucia Specia†
∗Federal University of São Carlos (UFSCar), São Carlos, Brazil
†Imperial College London, London, United Kingdom [email protected] [email protected] [email protected]
## Abstract
Pre-trained language models have achieved remarkable results on several NLP tasks. Most of them adopt masked language modeling to learn representations by randomly masking tokens and predicting them based on their context. However, this random selection of tokens to be masked is inefficient to learn some language patterns as it may not consider linguistic information that can be helpful for many NLP tasks, such as multimodal machine translation (MMT). Hence, we propose three novel masking strategies for cross-lingual visual pre-training - more informed visual masking, more informed textual masking, and more informed visual and textual masking - each one focusing on learning different linguistic patterns. We apply them to Vision Translation Language Modelling for video subtitles
(Sato et al., 2022) and conduct extensive experiments on the Portuguese-English MMT task.
The results show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Our models outperform the MMT baseline and we achieve state-of-theart accuracy (52.70 in terms of BLEU score)
on the How2 dataset, indicating that more informed masking helps in acquiring an understanding of specific language structures and has great potential for language understanding1.
## 1 Introduction
Pre-trained language models have achieved remarkable results on several Natural Language Processing (NLP) tasks (Devlin et al., 2019; Liu et al.,
2019; Baevski et al., 2019; Yang et al., 2019; Joshi et al., 2020; Clark et al., 2020; Lan et al., 2020; Zhuang et al., 2021). One of these tasks is multimodal machine translation (MMT), which has attracted considerable attention from both Computer Vision and NLP communities as it not only considers text information but also uses other modal information - mostly visual information - to improve translation outputs (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018). Recent advances in this field have achieved significant success and highlighted the efficiency of both multimodal and multilingual pre-training for MMT (Caglayan et al.,
2021; Sato et al., 2022).
Nonetheless, most pre-trained models follow BERT's pre-training paradigm (Devlin et al., 2019)
and adopt masked language modeling (MLM) and its variants to learn representations by masking tokens and making predictions based on their context.
The conventional MLM relies on randomly selecting tokens to be masked and therefore may not consider linguistic information that can be helpful for some NLP tasks, such as MMT.
In this paper, we address this problem through a systematic study of new masking approaches for cross-lingual visual pre-training. We propose more informed masking strategies to learn particular language patterns for downstream multimodal machine translation performance. These strategies consist of selectively masking linguistic and visual tokens instead of randomly masking them, focusing on situations that can be favored by a better understanding of specific visual or textual information.
For instance, since most pre-trained language models are based on English, they fail to understand some linguistic patterns that are common in many other languages, such as the grammatical gender of words. The English language treats the grammatical gender of words differently from languages such as French, Spanish, Portuguese, or Italian. While some languages have different words with the same meaning that are found in the feminine and masculine forms, this does not happen in the English language. For example, considering the English-Portuguese translation, the pronoun
"they" can be translated to "elas" (feminine) or
"eles" (masculine). Another example is the adjective "beautiful", which can be translated to "bonita"
(feminine) or "bonito" (masculine) depending on who or what it is referring to.
In this context, we propose three selective masking strategies - more informed visual masking, more informed textual masking, and more informed visual and textual masking - each one focusing on masking specific linguistic and visual tokens that can contribute to better understanding some of these different linguistic patterns. We apply them to Vision Translation Language Modelling for video subtitles (Sato et al., 2022) and run an extensive set of experiments on the Portuguese-English MMT
task.
We find that predicting particular masked elements can be a powerful objective for crosslingual visual pre-training as the pre-trained model can acquire a better understanding of specific language structures. Experimental results show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Our models outperform the MMT baseline and achieve state-ofthe-art accuracy (52.70 in terms of BLEU score) on the How2 dataset (Sanabria et al., 2018), indicating that more informed masking helps in capturing domain-specific language patterns and has great potential for language understanding.
## 2 Method
In this section, we present the detailed implementation of three masking strategies: more informed visual masking (Section 2.2.1), more informed textual masking (Section 2.2.2), and more informed visual and textual masking (Section 2.2.3), as well as the VTLM for video subtitles pre-training objective in Section 2.1.
## 2.1 Visual Translation Language Modelling For Video Subtitles
The VTLM objective (Caglayan et al., 2021) joins translation language modelling (TLM) (Conneau and Lample, 2019), which employs the masked language modelling objective, with masked region classification (MRC) (Chen et al., 2020; Su et al., 2020) to generate cross-lingual and multimodal representations. VTLM defines the input $x$ as the concatenation of the $m$-length source language sentence $s^{(1)}_{1:m}$, the $n$-length target language sentence $s^{(2)}_{1:n}$, and the corresponding image features $\{v_1, \cdots, v_o\}$:
$$x=[s_{1}^{(1)},\cdots,s_{m}^{(1)},s_{1}^{(2)},\cdots,s_{n}^{(2)},v_{1},\cdots,v_{o}]$$
The final model combines the TLM loss with the MRC loss according to the following equation:
$$\mathcal{L}=\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\log \Pr(\{\hat{y},\hat{v}\}\mid\tilde{x};\theta)$$

where $\tilde{x}$ is the masked input sequence, $\hat{y}$ denotes the ground-truth targets for masked positions, $\hat{v}$ represents the detection labels, and $\theta$ denotes the model parameters.
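In implementation terms, the combined objective can be read as a sum of two cross-entropy terms computed only at the masked positions. The following sketch illustrates this; tensor names and shapes are assumptions, not the authors' code.

```python
import torch.nn.functional as F

def vtlm_loss(text_logits, text_targets, region_logits, region_labels,
              text_mask, region_mask):
    """TLM loss over masked tokens plus MRC loss over masked image regions.

    text_logits:   (batch, seq_len, vocab_size)
    region_logits: (batch, num_regions, num_classes)
    *_mask:        boolean tensors marking the masked positions.
    """
    tlm = F.cross_entropy(text_logits[text_mask], text_targets[text_mask])
    mrc = F.cross_entropy(region_logits[region_mask], region_labels[region_mask])
    return tlm + mrc
```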
VTLM for video subtitles (Sato et al., 2022)
corresponds to VTLM adapted to the Brazilian Portuguese-English language pair and to more challenging circumstances regarding the image-text relationship. Its pre-training has visual and crosslingual resources and performs MLM and MRC on a three-way parallel multimodal and multilingual corpus, How2 (Sanabria et al., 2018).
Masking. VTLM selects a random set of linguistic and visual input tokens for masking. The masking proportion is 15% and it is applied separately to the visual and language flows. For textual masking, 80% of the 15% chosen tokens are replaced with the [MASK] token, 10% are replaced with random tokens from the vocabulary, and 10% are kept unchanged. Visual masking follows a similar scheme: for 80% of the selected regions, VTLM replaces the vector of projected region features with the [MASK] token embedding, 10% of the selected regions are replaced with region features randomly selected from all images in the batch, and the remaining 10% are left intact.
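The 80/10/10 scheme on the textual side is the standard BERT-style corruption; a minimal sketch over a tensor of token ids is given below (the mask token id and vocabulary size are placeholders for the real values).

```python
import torch

def mask_tokens(token_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """Select 15% of positions, then replace 80% of them with [MASK],
    10% with a random vocabulary token, and keep 10% unchanged."""
    labels = token_ids.clone()
    selected = torch.rand(token_ids.shape) < mask_prob
    labels[~selected] = -100  # positions not selected are ignored by the loss

    use_mask = selected & (torch.rand(token_ids.shape) < 0.8)
    token_ids[use_mask] = mask_token_id

    use_random = selected & ~use_mask & (torch.rand(token_ids.shape) < 0.5)
    token_ids[use_random] = torch.randint(vocab_size, (int(use_random.sum()),))
    # the remaining selected positions keep their original token
    return token_ids, labels
```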
## 2.2 More Informed Masking Strategies
Unlike the original approach, we do not randomly select tokens for masking. Instead, we focus on masking specific tokens in order to learn particular language patterns efficiently. Thus, we propose three new masking strategies that explore more informed ways of masking linguistic and visual tokens.
These approaches are based on the hypothesis that by performing more informed masking (e.g.,
masking tokens that reveal the grammatical gender of words) the model could come to a better understanding of these concepts, obtaining better performance in the translation of pronouns and words assigned as masculine, feminine, or neuter.
![2_image_0.png](2_image_0.png)
The overall architecture of the model is depicted in Figure 1.
## 2.2.1 More Informed Visual Masking
This approach consists of changing the visual masking so that the initial selection of tokens for masking is no longer random, and a greater proportion of tokens related to elements categorized as *people* are selected for masking, such as objects in the image categorized as "man", "woman", "boy", or
"girl". For convenience, we denote these tokens as TPeople.
To accomplish this, we changed the visual masking stage to retrieve detection information necessary to perform the identification of class labels during training. Specifically, we used object features that were previously extracted using the Faster R-CNN model (Ren et al., 2015) pre-trained on the Open Images Dataset V4 (Kuznetsova et al., 2020)
to retrieve the information needed to identify the categories of visual tokens during training.
At the beginning of the visual masking stage, we obtain the category index from the label map of the Open Images Dataset, as well as the variables containing the class predictions and confidence scores for each image from the batch. We then identify the index associated with each image and the position of each visual token in relation to the set of images from the batch. As a result, we are able to obtain the class label and confidence score for each token candidate to be masked and selectively choose the tokens that will be masked.
We apply this strategy to increase the proportion of TPeople among masked tokens, with a percentage of 33.34%, 50.0%, and 66.67%. In all cases, the remaining candidate tokens for making do not have the same category as TPeople and are randomly chosen. We maintained the visual masking ratio:
15% of inputs are selected for masking, from which 80% are replaced with the [MASK] token, 10% are replaced with random tokens, and 10% are left intact.
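Conceptually, the biased selection of visual tokens can be sketched as follows. The proportions follow the description above, while the people-related label names and the function itself are our own illustration, not the released code.

```python
import random

PEOPLE_CLASSES = {"Man", "Woman", "Boy", "Girl", "Person"}  # assumed Open Images label names

def select_visual_mask_positions(region_labels, mask_ratio=0.15, people_share=0.3334):
    """Pick the regions to mask so that `people_share` of them carry a
    people-related detection label; the rest are sampled at random."""
    num_to_mask = max(1, round(mask_ratio * len(region_labels)))
    num_people = round(people_share * num_to_mask)

    people_idx = [i for i, c in enumerate(region_labels) if c in PEOPLE_CLASSES]
    other_idx = [i for i, c in enumerate(region_labels) if c not in PEOPLE_CLASSES]

    chosen = random.sample(people_idx, min(num_people, len(people_idx)))
    chosen += random.sample(other_idx, min(num_to_mask - len(chosen), len(other_idx)))
    return sorted(chosen)
```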
## 2.2.2 More Informed Textual Masking
Similar to the previous approach, this masking strategy aims to mask a greater amount of tokens that reveal the grammatical gender of words in a given sentence. Thus, the initial selection of tokens for masking was changed to no longer be random and to favor more pronouns - such as "he"/"she",
"him"/"her", and "his"/"hers" - among the tokens that will be masked, maintaining the 15% textual masking ratio. For convenience, we denote these tokens as TPronouns.
As VTLM stores the input textual stream as integer-type *Tensors*, we changed the VTLM architecture to convert this numerical stream to words at the beginning of the textual masking stage and then ascertain each sentence from the batch to identify subject pronouns, object pronouns, and possessive adjectives and pronouns. After identifying these words, they are marked and associated with their original numerical form so that they can be identified later in the selection of tokens for masking.
At this stage, TPronouns are identified and tokens are selectively chosen to be masked, with a higher proportion of TPronouns being masked.
We performed three experiments with the following percentages of TPronouns: 33.34%, 50.0%,
and 66.67%. In all cases, the remaining masked tokens did not have the same category as TPronouns and were randomly chosen following the standard approach.
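The textual counterpart works analogously over token positions; a sketch is given below, where the pronoun list covers the subject, object, and possessive forms mentioned above and the function name is hypothetical.

```python
import random

PRONOUNS = {"he", "she", "him", "her", "his", "hers"}  # gender-revealing forms

def select_textual_mask_positions(tokens, mask_ratio=0.15, pronoun_share=0.3334):
    """Bias the 15% textual mask selection towards pronoun tokens."""
    num_to_mask = max(1, round(mask_ratio * len(tokens)))
    num_pron = round(pronoun_share * num_to_mask)

    pron_idx = [i for i, t in enumerate(tokens) if t.lower() in PRONOUNS]
    other_idx = [i for i, t in enumerate(tokens) if t.lower() not in PRONOUNS]

    chosen = random.sample(pron_idx, min(num_pron, len(pron_idx)))
    chosen += random.sample(other_idx, min(num_to_mask - len(chosen), len(other_idx)))
    return sorted(chosen)
```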
| Model | TPeople | Test BLEU | Test METEOR | Valid BLEU | Valid METEOR |
|-------|---------|-----------|-------------|------------|--------------|
| VTLM: random masking | – | 51.80 | 78.04 | 52.44 | 78.25 |
| VTLM: more informed visual masking | 33.34% | **52.70** | **79.63** | **53.25** | **79.83** |
| VTLM: more informed visual masking | 50.00% | 51.92 | 79.10 | 52.51 | 79.41 |
| VTLM: more informed visual masking | 66.67% | 51.65 | 78.64 | 52.26 | 79.09 |
Table 1: BLEU and METEOR scores for random masking VTLM (baseline) and more informed visual masking VTLM (our model) for the MMT task.
## 2.2.3 More Informed Visual And Textual Masking
The more informed visual and textual masking strategy is a combination of the two previous approaches, i.e., we mask a greater proportion of TPeople tokens at the visual masking stage, as well as TPronouns tokens at the textual masking stage.
This approach aimed to analyze the model behavior when applying more informed visual masking and more informed textual masking simultaneously.
## 3 Experiments
Pre-training data. We use the How2 corpus
(Sanabria et al., 2018) in all stages of experimentation. How2 is a multimodal and multilingual collection of approximately 80,000 instructional videos accompanied by English subtitles and around 300 hours of collected crowdsourced Portuguese translations. For pre-training, we used a set from the How2 corpus that contains 155k features and their corresponding text in English and Portuguese2. We applied Moses tokenization3and used byte pair encoding (Sennrich et al., 2016) to split words into subword units.
Pre-training. We followed Caglayan et al.'s (2021)
work to conduct the experiments. We set the model dimension to 512, the feed-forward layer dimension to 2048, the number of layers to 6 and the number of attention heads to 8. We randomly initialize model parameters rather than using pre-trained LM checkpoints. We use Adam (Kingma and Ba, 2014) with the mini-batch size set to 32 and the learning rate set to 0.0001. We set the dropout (Srivastava et al., 2014) rate to 0.1 in all layers. The pre-training was conducted on a single NVIDIA
GeForce GTX 1070 GPU for 1.5M steps, and best checkpoints were selected with respect to validation set accuracy.

² The dataset used in this work is publicly available under the Creative Commons BY-SA 4.0 License and BSD-2-Clause License.

³ https://github.com/moses-smt/mosesdecoder
Fine-tuning. The encoder and the decoder of Transformer-based (Vaswani et al., 2017) MMT
models are initialized with weights from VTLM,
and fine-tuned with a smaller learning rate. We use the same hyperparameters as the pre-training phase, but we follow Sato et al.'s (2022) work and decrease the batch size to 16 and the learning rate to 1e-5. For evaluation, we use the models with the lowest validation set perplexity to decode translations with beam size of 8.
Evaluation Metrics. We report the automatic evaluation using BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005). We also conduct qualitative analyses to better show the effects of the proposed masking strategies.
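As a usage note, corpus-level BLEU of the kind reported here can be computed, for example, with the sacrebleu package; whether the authors used this exact tool is not stated, so the snippet below is only illustrative.

```python
import sacrebleu

# hypotheses: decoded translations; references: the gold translations
hypotheses = ["So there's a couple of different ways to get him out."]
references = ["So there's a couple of different ways to take him out."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```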
## 4 Results
The trained models were evaluated on valid and test sets of How2 for the multimodal machine translation (MMT) task. We compare our models with the original VTLM for video subtitles model (Sato et al., 2022), which has the same architecture but uses the popular random masking strategy instead of ours.
## 4.1 More Informed Visual Masking
Table 1 shows BLEU and METEOR scores across valid and test sets of How2. The results show that this new masking strategy affects the final performance of the model. For TPeople = 33.34%, our model achieved 52.70 BLEU and 79.63 METEOR
on the test set and 53.25 BLEU and 79.83 METEOR on the valid set for the MMT task, outperforming the baseline by approximately 1 BLEU
and 1.6 METEOR. When TPeople = 50.0%, our model also outperformed the baseline in terms of both BLEU and METEOR, but its performance was slightly inferior to the performance of the first experiment. Finally, when TPeople = 66.67%, the
| | |
|---|---|
| Source: | Então ele ou ela não carrega todo o peso do SCBA, na área do ombro ou região ao redor do pescoço. |
| Reference: | So he or she is not carrying all the weight of the SCBA, in the shoulder area, or region around the neck. |
| Baseline: | So it or it doesn't carry all the weight of the SCBA, in the shoulder area, or region around the neck. |
| Our model: | So he or she won't carry all the weight of the SCBA, in the shoulder area, or region around the neck. |
| Source: | Então, há algumas maneiras diferentes de levá-lo pra fora. |
| Reference: | So there's a couple of different ways to take him out. |
| Baseline: | So there's a couple of different ways to take it out. |
| Our model: | So there's a couple of different ways to get him out. |
| Source: | E nós vamos fazer isso em seu cabelo hoje. |
| Reference: | And we're going to be cornrowing that into her hair today. |
| Baseline: | And we're going to do that on your hair today. |
| Our model: | And we're going to do that on her hair today. |
| Source: | Ela pegará neve e a empurrará para o lado da estrada ou ela pegará a sujeira de um ponto alto e a moverá para o lado. |
| Reference: | It will catch snow and push it over to the side of the road or it will catch dirt out of a high spot and move it over to the side. |
| Baseline: | She will take snow and push it to the side of the road or she will take the dirt from a high point and move it to the side. |
| Our model: | It will take snow and push it to the side of the road or it will take the dirt from a high spot and move it to the side. |

Table 2: Examples of translations by random masking VTLM (baseline) and more informed visual masking VTLM (our model).
![4_image_0.png](4_image_0.png)
performance of our model was superior to the baseline by approximately 0.7 METEOR. However, in terms of BLEU, the performance was inferior to the baseline by approximately 0.16 BLEU, presenting a behavior different from that observed in the last two experiments.
Therefore, the results indicate that more informed visual masking benefits the final performance of the model to a certain extent. By increasing the proportion of TPeople tokens being masked, there is an improvement in the performance of the model compared to the baseline. Nevertheless, when this proportion becomes greater than 50%,
this improvement tends to decrease. This behavior may be explained by the decrease in tokens related to other categories being masked since the visual masking ratio did not change, i.e., it remained at 15%. Thus, excessively increasing the proportion of TPeople tokens being masked can jeopardize the learning of elements from other categories.
Qualitative Analysis. To better understand the effect of our proposed pre-training masking approach, we compare some examples of texts translated by random masking VTLM (baseline) and more informed visual masking VTLM (our model). The examples are presented in Table 2. In the first example, the baseline mistranslates the subject pronouns
"he" and "she", translating both to "it", while our model translates them correctly, achieving better performance. In the second example, the baseline mistranslates the object pronoun "him", translating it to "it", while our model translates it correctly.
The third example illustrates the correct translation of the possessive adjective "her" by our model, while the baseline mistranslates it to "your". Finally, the baseline references an object using the subject pronoun "she" instead of "it". In contrast, our model does not make the same mistake and uses the pronoun correctly.
## 4.2 More Informed Textual Masking
We run the same experiment using three different ratios of TPronouns - 33.34%, 50.0%, and 66.67% –
and the results are shown in Table 3. The results show that this masking strategy also affects the final performance of the model. For TPronouns =
33.34%, our model scored 52.64 BLEU and 79.45 METEOR on the test set and 52.96 BLEU and 79.53 METEOR on the valid set, outperforming the baseline by approximately 0.7 BLEU and 1.3
| Model | TPronouns | Test BLEU | Test METEOR | Valid BLEU | Valid METEOR |
|-------|-----------|-----------|-------------|------------|--------------|
| VTLM: random masking | – | 51.80 | 78.04 | 52.44 | 78.25 |
| VTLM: more informed textual masking | 33.34% | 52.64 | 79.45 | 52.96 | 79.53 |
| VTLM: more informed textual masking | 50.00% | 52.39 | 79.35 | 52.94 | 79.51 |
| VTLM: more informed textual masking | 66.67% | 52.21 | 79.27 | 52.82 | 79.42 |
Table 3: BLEU and METEOR scores for random masking VTLM (baseline) and more informed textual masking VTLM (our model) for the MMT task.

![5_image_0.png](5_image_0.png)
| | |
|---|---|
| Source: | Se você andar seu cachorro do seu lado esquerdo, você quer que ele se sente do lado, porque o que ele faz é apertar, então, se você estiver por aqui, o cachorro deveria tê-lo aqui. |
| Reference: | If you walk your dog on your left side you want it to sit on the side because what it does is tighten up so if you're over here the dog should have it over here. |
| Baseline: | If you walk your dog on your left side you want him to sit on the side because what he does is squeeze, then if you're standing over here the dog should have him here. |
| Our model: | If you walk your dog on your left side you want it to sit on the side because what it does is tighten, then if you're over here the dog should have it here. |
| Source: | Ela entra em cena depois que a cena começa entre o policial e Stanley. |
| Reference: | She walks into the scene after the scene begins between the police officer and Stanley. |
| Baseline: | It goes into scene after the scene starts between the police officer and Stanley. |
| Our model: | She goes into scene after the scene starts between the police officer and Stanley. |
| Source: | E eu só trabalhei uma noite com ela. |
| Reference: | And I only worked one night with her. |
| Baseline: | And I just worked a night with it. |
| Our model: | And I just worked a night with her. |
| Source: | Mas, eu vou tentar de qualquer maneira e você pode ter uma ideia do que você pode querer fazer. |
| Reference: | But, I'm going to try it anyway and you can get an idea of what you might want to do. |
| Baseline: | But, I'm going to try anyway and you might have an idea of what you might want to do. |
| Our model: | But, I'm going to try it anyway and you might get an idea of what you might want to do. |

Table 4: Examples of translations by random masking VTLM (baseline) and more informed textual masking VTLM (our model).
METEOR. As for TPronouns = 50.0%, our model also surpassed the baseline, but its performance was worse than in the previous experiment. Finally, for TPronouns = 66.67%, our model performed better than the baseline in terms of BLEU and METEOR, but its performance was inferior than in the last two experiments, when the chosen proportions were 33.34% and 50.0%.
Therefore, the results indicate that masking more TPronouns tokens leads to an improvement in the final performance of the model. However, even though our model surpassed the baseline in all experiments, this performance improvement is limited, as the best performance was observed when TPronouns proportion was 33.34%, followed by 50.0% and 66.67%, respectively.
Qualitative Analysis. Some examples of texts translated by each model are presented in Table 4.
In the first example, the random masking VTLM
uses the pronouns "he" and "him" to refer to the word "dog" instead of using the pronoun "it",
which should have been used in this case. On the other hand, our model does not make the same mistake and uses the correct pronoun in all cases, achieving better translation performance. In the second example, the random masking VTLM mistranslates the subject pronoun "she" and translates it to "it", which is a serious translation error since the pronoun "it" cannot be used to refer to a person.
In contrast, our model uses the correct pronoun and achieves better performance. The next example illustrates the incorrect translation of the object pronoun "her" by the baseline, which again uses the pronoun "it" to refer to a person. However, this error is not made by our model, which makes the correct use of the pronoun in the translation.
The three previous examples illustrate situations similar to those observed with the application of more informed visual masking. However, the last example shows a further improvement in translation. This improvement is related to the use of the pronoun "it" as the direct object of a verb. While the baseline omits this pronoun in the translation, our model correctly uses it after the verb "try".
## 4.3 More Informed Visual And Textual Masking
Table 5 shows BLEU and METEOR scores across valid and test sets of How2. The obtained results show that the more informed visual and textual masking strategy also affects the performance of the MMT model. Our model achieved 52.34 BLEU
and 78.77 METEOR on the test set and 53.28 BLEU and 79.44 METEOR on the valid set, outperforming the baseline by approximately 0.7 BLEU
and 0.9 METEOR.
Although the performance improvement was not very high in terms of BLEU and METEOR, the results indicate that applying more informed visual and textual masking benefits the final performance of the model.
Qualitative Analysis. To further understand the effectiveness of our approach, we compared some examples of texts translated by random masking VTLM (baseline) and more informed visual and textual masking VTLM (our model). The examples are presented in Table 6. In the first example, the random masking VTLM references the word "website" using the subject pronoun "he" instead of the pronoun "it". In contrast, our model does not make the same mistake and uses this pronoun correctly.
In the second example, the object pronoun "him" is used incorrectly by the baseline. In this case, the pronoun "it" should have been used and our model makes the correct use of this pronoun. The third case illustrates the correct translation of the possessive adjective "your" by our model, while the baseline mistranslates it to "their". In the fourth example, our model correctly uses the pronoun "it" as the direct object of the verb "take", while the baseline omits this pronoun in the translation.
Finally, the last situation illustrates a new improvement not seen when applying more informed visual masking or more informed textual masking separately. Although visual information improves the overall performance of the standard multimodal model, we observed that it can lead to the incorrect use of certain pronouns. For instance, when the video frame associated with the text has an element categorized as "man", the pronouns used in the translation tend to be "he" or "him". Likewise, when there is an element categorized as "woman" in the video frame, the pronouns tend to be "she" or "her". On the other hand, our more informed masking approach tends to better deal with this bad tendency of multimodal models. In the last example, the two elements categorized as "man" in the image possibly influenced the incorrect choice of the pronoun "him" after the verb "bring" by the baseline model. However, our model did not make the same mistake and used the pronoun "it" correctly.
## 5 Related Work
Pre-trained language models have become essential in the natural language processing field. One pre-trained model that has attracted considerable attention in this field is BERT (Devlin et al.,
2019). BERT introduces masked language modeling (MLM) to efficiently learn bidirectional representations by masking a set of input tokens at random and predicting them afterward. In this approach, 15% of input tokens are randomly selected for masking, from which 80% are replaced with the [MASK] token, 10% are replaced with a random token, and 10% are left intact.
Following BERT, several approaches have been proposed to optimize pre-trained language models. Devlin et al. (2019) later propose whole word masking (wwm) in an attempt to address the drawbacks of random token masking in the MLM task.
In this approach, input tokens are segmented into units corresponding to whole words, and instead of selecting tokens to mask at random, they mask all of the tokens corresponding to a whole word at once. Zhang et al. (2019) introduce ERNIE to optimize the masking process of BERT by applying entity/phrase masking. Instead of randomly selecting input words, phrase-level masking masks consecutive words and entity-level masking masks the named entities. Clark et al. (2020) present ELECTRA, which uses a generator-discriminator framework. While the generator learns to predict the original words of the masked tokens, the discriminator uses Replaced Token Detection to discriminate whether the input token is replaced by the generator. Levine et al. (2021) propose a principled masking strategy based on the concept of Pointwise Mutual Information (PMI). PMI-masking jointly
| Model | Test BLEU | Test METEOR | Valid BLEU | Valid METEOR |
|-------|-----------|-------------|------------|--------------|
| VTLM: random masking | 51.80 | 78.04 | 52.44 | 78.25 |
| VTLM: more informed visual and textual masking | **52.34** | **78.77** | **53.28** | **79.44** |

Table 5: BLEU and METEOR scores for random masking VTLM (baseline) and more informed visual and textual masking VTLM (our model) for the MMT task.

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)
| | |
|---|---|
| Our model: | That's what gives my website to the color options that it has. |
| Source: | E eu vou empurrá-lo de volta. |
| Reference: | And I'm going to push it back down. |
| Baseline: | And I'm going to push him back. |
| Our model: | And I'm going to push it back down. |
| Source: | Eles mantêm seus dedos juntos e são bons para muitas atividades. |
| Reference: | They keep your fingers kind of together and are good for a lot of activities. |
| Baseline: | They keep their fingers together and they're good for many activities. |
| Our model: | They keep your fingers together and they're good for a lot of activities. |
| Source: | Agora pegue, coloque a marca do oleiro lá. |
| Reference: | Now take it, put the potter's mark in there. |
| Baseline: | Now take, put your potter's mark on there. |
| Our model: | Now take it, put the potter's mark in there. |
| Source: | Ele quer trazê-lo de volta naturalmente. |
| Reference: | He wants to bring it back naturally. |
| Baseline: | He wants to bring him back naturally. |
| Our model: | He wants to bring it back naturally. |

Table 6: Examples of translations by random masking VTLM (baseline) and more informed visual and textual masking VTLM (our model).
![7_image_2.png](7_image_2.png)
masks a token n-gram if it exhibits high collocation over the corpus.
Combining cross-lingual and visual pre-training, Caglayan et al. (2021) propose Visual Translation Language Modelling (VTLM), which extends the TLM framework (Conneau and Lample, 2019) with regional features and performs masked language modeling and masked region classification on a three-way parallel language and vision dataset. The standard masking ratio is maintained (i.e. 15%) and it is applied separately to visual and language flows.
VTLM achieved a 44.0 BLEU and 61.3 METEOR
on the English-German 2016 test set of Multi30k
(Elliott et al., 2016) for the MMT task. Following this approach, Sato et al. (2022) propose VTLM for video subtitles, which extends VTLM to a new language pair and to more challenging circumstances concerning the image-text relationship by using video frames with subtitles instead of images with their corresponding description. They use the same random masking approach for both visual and textual masking and achieved a 51.8 BLEU and 78.0 METEOR on the Portuguese-English test set of How2 (Sanabria et al., 2018) for the MMT task. In this paper, we propose three novel masking strategies for cross-lingual visual pre-training and we apply them to VTLM for video subtitles to test their efficacy for downstream MMT performance.
## 6 Conclusions
In this work, we show that predicting particular masked elements can benefit cross-lingual visual pre-training as the pre-trained model can acquire a better understanding of specific language structures, which improves downstream tasks such as multimodal machine translation. We present three selective masking strategies that focus on masking specific linguistic and visual tokens that can contribute to understanding some language patterns.
We achieve state-of-the-art accuracy on the How2 dataset and show that our masking approaches yield significant improvements over the original random masking strategy for downstream MMT performance. Even though we only conduct experiments on the MMT task using VTLM as the base model, our method can easily generalize to other models and other NLP tasks. We hope that our work here will further accelerate future research on Brazilian Portuguese and other low-resource languages.
For future work, we will investigate the impact of visual and textual masking probability and further explore more effective masking approaches for downstream MMT performance.
## Limitations
Although our research led to improvements in the translation of subject pronouns, object pronouns, and possessive adjectives and pronouns, these improvements did not cover non-binary-associated pronouns, such as they/them/theirs, *xe/xem/xyr* and *ze/hir/hirs*. The large underrepresentation of non-binary genders in textual and visual data contributes to propagating the misrepresentation of non-binary people by language models. In this paper, we were unable to work against this issue, thus we hope to contribute to a fairer representation of these disadvantaged groups in the future.
## Ethics Statement
We acknowledge that all co-authors of this paper are aware of the *ACM Code of Ethics* and honor the code of conduct. We collected our data from a public dataset that permits academic use. As our experiments are limited to the binary linguistic forms represented in the used data, we cannot guarantee that our models will always generate unbiased content.
## References
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5360–5369, Hong Kong, China. Association for Computational Linguistics.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 304–323, Belgium, Brussels. Association for Computational Linguistics.
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, and Lucia Specia. 2021. Cross-lingual Visual Pretraining for Multimodal Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Short Papers, online. Association for Computational Linguistics.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: Universal image-text representation learning.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In *ICLR*.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In *Proceedings of the Second Conference on Machine Translation*, pages 215–233, Copenhagen, Denmark. Association for Computational Linguistics.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In *Proceedings of the 5th Workshop on Vision and Language*, pages 70–74, Berlin, Germany. Association for Computational Linguistics.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *International* Conference on Learning Representations.
Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, and et al. 2020. The open images dataset v4. *International Journal of Computer Vision*, 128(7):1956–1981.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, and Yoav Shoham. 2021. {PMI}-masking: Principled masking of correlated spans. In *International Conference on* Learning Representations.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In *Advances in Neural Information Processing Systems*,
volume 28. Curran Associates, Inc.
Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loic Barrault, Lucia Specia, and Florian Metze. 2018. How2: A large-scale dataset for multimodal language understanding. In Visually Grounded Interaction and Language (ViGIL), NeurIPS, Montreal, Canada, December 2018.
Júlia Sato, Helena Caseli, and Lucia Specia. 2022.
Multilingual and multimodal learning for Brazilian Portuguese. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages
919–927, Marseille, France. European Language Resources Association.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*,
pages 543–553, Berlin, Germany. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pretraining of generic visual-linguistic representations.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. In *Neural Information Processing Systems*.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy. Association for Computational Linguistics.
Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A
robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218–1227, Huhhot, China. Chinese Information Processing Society of China. |
shen-silberer-2023-combining | Combining Tradition with Modernness: Exploring Event Representations in Vision-and-Language Models for Visual Goal-Step Inference | https://aclanthology.org/2023.acl-srw.36 | Procedural knowledge understanding (PKU) underlies the ability to infer goal-step relations. The task of Visual Goal{--}Step Inference addresses this ability in the multimodal domain. It requires to identify images that represent the steps towards achieving a textually expressed goal. The best existing methods encode texts and images either with independent encoders, or with object-level multimodal encoders using blackbox transformers. This stands in contrast to early, linguistically inspired methods for event representations, which focus on capturing the most crucial information, namely actions and the participants, to learn stereotypical event sequences and hence procedural knowledge. In this work, we study various methods and their effects on PKU of injecting the early shallow event representations to nowadays multimodal deep learning-based models. We find that the early, linguistically inspired methods for representing event knowledge does contribute to understand procedures in combination with modern vision-and-language models. In the future, we are going to explore more complex structure of events and study how to exploit it on top of large language models. | # Combining Tradition With Modernness: Exploring Event Representations In Vision-And-Language Models For Visual Goal–Step Inference
Chong Shen and **Carina Silberer**
Institute for Natural Language Processing, University of Stuttgart, Germany
{chong.shen,carina.silberer}@ims.uni-stuttgart.de
## Abstract
Procedural knowledge understanding underlies the ability to infer goal–step relations. The task of Visual Goal–Step Inference addresses this ability in the multimodal domain. It requires the identification of images that depict the steps necessary to accomplish a textually expressed goal. The best existing methods encode texts and images either with independent encoders, or with object-level multimodal encoders using blackbox transformers. This stands in contrast to early, linguistically inspired methods for event representations, which focus on capturing the most crucial information, namely actions and participants, to learn stereotypical event sequences and hence procedural knowledge. In this work, we study various methods and their effects on procedural knowledge understanding of injecting the early shallow event representations to nowadays multimodal deep learning-based models. We find that the early, linguistically inspired methods for representing event knowledge do contribute to understand procedures in combination with modern visionand-language models. This supports further exploration of more complex event structures in combination with large language models.1
## 1 Introduction
Procedural Knowledge Understanding (PKU) implies reasoning about how to complete a task or achieve a goal (Mujtaba and Mahapatra, 2019).
While previous works focus on plain texts (Yang and Nyberg, 2015; Zhou et al., 2019; Zhang et al.,
2020a,b; Lyu et al., 2021; Sun et al., 2022), recent studies extend the task to the visual–linguistic domain. They ground procedural everyday tasks in the visual world, as a step towards situated procedural understanding in the real world.
Yang et al. (2021) propose a novel PKU task that utilizes both textual and visual information by selecting an image conditioned on a sentence which 1The code is available at https://github.com/
st143575/Exploring-Event-In-VGSI.
![0_image_0.png](0_image_0.png)
describes a high-level goal (illustrated in Figure 1, cf. Section 2.2). Their experimental results show that there is still a large gap to human performance on this task. While Yang et al. (2021) represent goal descriptions by their neural embeddings, earlier approaches to representing procedural knowledge or stereotypical event sequences (i.e., goals and steps; cf. scripts, Shank and Abelson, 1977), in contrast, focus on capturing the most essential information of events, namely the actions and their main participants (Balasubramanian et al., 2013; Pichotta and Mooney, 2014, inter alia).
In this work, we explore different ways to inject these linguistically inspired representations to the recent powerful deep learning approaches, and study their contribution to multimodal PKU. Specifically, we investigate the relational event representation (Balasubramanian et al., 2013) and the multi-argument event representation (Pichotta 254 and Mooney, 2014, 2016) due to their simple but condensed structure holding the most crucial information such as the action and the main participants in the main clause. We also evaluate different approaches to encode and inject such event knowledge to the model used by Yang et al.
(2021), while also taking the contextual information into account. We conduct our experiments from three perspectives. First, we explore two approaches for event knowledge injection: (1)
EVENT replaces the sentence describing the event by the two aforementioned event representations;
(2) SENTENCE+EVENT appends the two types of event representations to the sentence describing that event. Second, we compare the embeddings extracted from different layers of the text encoder based on the finding of Jawahar et al. (2019) and Vulic et al. ´ (2020), namely that lexical, syntactic and semantic information tend to be captured by the first, middle and last couple of layers, respectively.
And third, we study the contribution of contextualised embeddings to represent the event and its participants compared to local embeddings.
The main contributions of this paper are:
(1) comparison between two approaches for linguistically-inspired event knowledge injection for the task of multimodal procedural knowledge learning; (2) comparison of three levels of linguistic information in the text embedding; (3) investigation of local and contextualised event embeddings;
(4) assessment of different abstract representations for the implicit subject of instructional texts.
We find that appending the multi-argument event representation to the input sentence with the
<|startoftext|> token as the implicit subject, and taking the average of the last 4 hidden layers of CLIP's text encoder is the best way to encode and inject event knowledge to a deep learning model.
Specifically, first encoding the full sentence and then extracting and averaging the word-level embeddings of the components of the event representation can use the contextual information in the sentence outside the event itself.
## 2 Related Work 2.1 Event Definitions And Representations
The concept *event* can be defined in various ways.
In early works, an event is either defined as a verb
(Katz and Arosio, 2001), or an expression that have implicit time dimension and is either a verb or a noun phrase (Schilder and Habel, 2001), or a proposition consisting of the subject and the predicate (Filatova and Hovy, 2001). Pustejovsky et al. (2005)
define an event as a predicate describing a state or a circumstance in which something holds true. Li et al. (2021) define an event as the occurrence of an action causing a state change, which is performed by some participant(s) in a particular manner. For instance, image I3 in Figure 1 illustrates the event of A person beating together butter and sugar with a mixer.
Later studies on *script learning* (Zhang, 2022)
extend the definition of the event by its surrounding components in the text. Chambers and Jurafsky (2008) represent an event as a *(verb, dependency)*-pair extracted from narrative texts using a dependency parser. Balasubramanian et al. (2013)
generate event schemata from news articles using
(subject, verb, object)-pairs as the event representation. Pichotta and Mooney (2014, 2016) represent events as *(subject, verb, object, preposition)* tuples that model the interactions between entities in a script.
In contrast, recent works focus on extracting events with more complex structures and richer information from contexts. Yu et al. (2022) design a BERT-based framework for building event extractors in a weak supervised manner. Chen et al.
(2021) train a multimodal Transformer (Vaswani et al., 2017) to jointly extract events from videos and texts. Wei et al. (2023) propose a framework for zero-shot event extraction using a sibling model to InstructGPT (Ouyang et al., 2022). Knowledge graphs (Hogan et al., 2021) have been widely used to extract events from multimodal data and represent events in a more complex structure (Li et al., 2020, 2022). We adopt the relational event representation of Balasubramanian et al. (2013) and the multi-argument event representation of Pichotta and Mooney (2014) for our experiments due to the low performance of recent event extractors on the dataset used for our experiments.
## 2.2 Procedural Knowledge Understanding
A *procedure* is a compound event that can be broken down into multiple events (Zhang, 2022). It consists of a goal and a sequence of steps towards accomplishing that goal. Procedural knowledge understanding (PKU) is the task of learning the relations between the goal and the steps. Various approaches have been proposed to understanding procedures using event knowledge. Tandon et al.
(2020) use entity tracking to generate state changes from procedural text. Zhang et al. (2020b) learn goal–step relations and step–step temporal relations in procedural texts and introduce a 4-way multiple choice task for goal–step inference. Yang et al.
(2021) extend it to the multimodal domain and learn goal–step relations from texts and images.
Lyu et al. (2021) generate the sequence of steps conditioned on a given goal. Zhou et al. (2022)
discover the hierarchical structure in procedural knowledge using action linking. Based on the work of Yang et al. (2021), we investigate different ways to encode and inject classical event knowledge to recent deep learning models.
Goal–Step–Inference (Zhang et al., 2020b) is the task of reasoning about goal–step relations from instructional texts. Given a goal sentence and four candidate step descriptions, a model should choose the step that leads to the goal. The main challenge of this task is that it requires to understand both, the actions of goals and steps and their relations.
Yang et al. (2021) extend the task to the multimodal domain through the *Visual Goal–Step Inference* task, in which steps are described by images. They attempt to overcome the challenge by matching the goal sentence and the step image. However, they still observe a significant gap between model and human performance. Our work seeks to bridge this gap with multiple approaches by combining stateof-the-art neural models with early linguistically motivated event representations (see above).
## 2.3 Vision-And-Language Models
In recent years, Vision-and-Language (V&L) models have made tremendous progress on a wide range of multimodal tasks, such as visual commonsense reasoning (Lu et al., 2019), image–text retrieval
(Chen et al., 2020), text-to-image and image-to-text generation (Rombach et al., 2022; Li et al.,
2023). One strand of models are *fusion encoders* which learn a fused representation of images and texts. For example, LXMERT (Tan and Bansal, 2019) uses attention (Vaswani et al., 2017) to learn intra-modal and cross-modal relationships while training a language encoder, an object relationship encoder and a cross-modality encoder. Although the model learns the alignment between images, objects and words in sentences via the object-level pretraining objectives, it does not understand the relations between the objects and the action. Another line of works propose *dual encoders* which learn separate encodings of images and language. A
prominent example is CLIP (Radford et al., 2021),
which uses a contrastive objective to train a text encoder (GPT-2; Radford et al., 2019) and an image encoder (e.g., ViT; Dosovitskiy et al., 2020). CLIP
achieves state-of-the-art performance across multiple tasks. Different from LXMERT, CLIP is trained to match an image as a whole to a text description.
We use this advantage and extract image-grounded sentence embeddings using CLIP's text encoder.
Since CLIP applies a subtoken-level tokenization, the outputs of its text encoder are embeddings for the subtokens in the input sentence. Although it is a common practice to use the embedding of the classification token as the overall sentence embedding, this approach has been shown to be suboptimal
(Vulić et al., 2020). We conduct experiments to find the optimal sentence representation.
## 3 VGSI: Visual Goal–Step Inference Task
Task Definition. Yang et al. (2021) define VGSI
as a 4-way multiple choice problem. As shown in the example in Figure 1, given a textual *goal* G and four images Ii, i ∈ {1, 2, 3, 4} representing four candidate *steps*, the task is to select the image that represents a correct step towards accomplishing G.
In this paper, we additionally explore a stricter definition of VGSI, where the task is to select the respective correct image of all steps that are necessary to reach the goal G.
## 3.1 Methods

## 3.1.1 Event Representations
To obtain event representations from goal and step sentences, we first extract the subject, verbal predicate, *direct object* and *prepositional phrase* from the sentences using a dependency parser (Dozat and Manning, 2016); specifically, we use SuPar, available at https://github.com/yzhangcs/parser.
Implicit Subject Representation. Due to the nature of the dataset of procedural instructions, textual goals and steps are usually imperative sentences, and as a consequence, the subject is left off. To encode the subject, we conduct experiments to compare event representations with no explicitly mentioned subject to those which express the subject (1) by the token *person*, or (2)
by the special *<|startoftext|>* token of the CLIP tokenizer. Since the *<|startoftext|>* token added by us is always between the *<|startoftext|>* token of the CLIP tokenizer and the verbal predicate, its embedding is supposed to capture syntactic information from these two surrounding tokens via the attention mechanism (i.e. the information about the position of the subject of a sentence). To verify this hypothesis, we conduct two groups of probing experiments using the most common and the least common token in the input text as the pseudo-subject, respectively (see Section 5.1). We find that sentences with the *<|startoftext|>* token as the pseudo-subject lead to the best result.
Event Representations. The event representation is an essential component of our task. As introduced in Section 2, we represent events in the goal and step sentences using two types of representations: (1) the relational event representation (Balasubramanian et al., 2013) which is a (subject, verb, object) tuple, and (2) the multi-argument event representation (Pichotta and Mooney, 2014) which is a *(subject, verb, object, prepositional phrase)* tuple.
Table 1 shows examples of all representations we explore. In the case that the object or prepositional phrase is absent, we represent it by a *[PAD]* token, e.g., (*<|startoftext|>*, pour, sauce, *[PAD]*).
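For concreteness, the extraction step can be sketched as follows. This is an illustrative sketch only: it uses spaCy's dependency parser in place of the SuPar parser mentioned above, and the `extract_event` helper, the chosen dependency labels and the padding convention are our own illustrative choices, not the authors' implementation.

```python
# Illustrative sketch: build a multi-argument event tuple
# (subject, verb, object, prepositional phrase) from a step headline.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

PAD = "[PAD]"
PSEUDO_SUBJECT = "<|startoftext|>"  # implicit subject of imperative instructions

def extract_event(sentence: str) -> tuple:
    doc = nlp(sentence)
    # main verbal predicate: the root of the dependency tree
    root = next((tok for tok in doc if tok.dep_ == "ROOT"), None)
    if root is None:
        return (PSEUDO_SUBJECT, PAD, PAD, PAD)
    subj = next((t.text for t in root.children if t.dep_ in ("nsubj", "nsubjpass")),
                PSEUDO_SUBJECT)  # imperative sentences usually have no subject
    obj = next((t.text for t in root.children if t.dep_ in ("dobj", "obj")), PAD)
    # first prepositional phrase attached to the predicate, kept as its full subtree
    prep = next((" ".join(w.text for w in t.subtree)
                 for t in root.children if t.dep_ == "prep"), PAD)
    return (subj, root.lemma_, obj, prep)

print(extract_event("pour the soy or tamari sauce into a suitable small mixing container or jug"))
# e.g. ('<|startoftext|>', 'pour', 'sauce', 'into a suitable small mixing container or jug')
```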
Local vs. Contextualised Event. To assess the effectiveness of event representations, we deliberately use non-contextualised embeddings to disentangle the *subj–pred–obj(–pp)* information from the overall sentence. In detail, the components of the event representations are concatenated to form a sentence, which is then encoded by the CLIP
text encoder (i.e. GPT-2). For instance, the event
(*<|startoftext|>*, pour, sauce) is turned into the input *<|startoftext|> pour sauce*. We compare this encoding method to one that uses contextualised embeddings: We first encode the whole sentence and extract all word embeddings. If the tokenizer split a word into subtokens, we mean-pool their corresponding embeddings. Then, we mean-pool the word embeddings which are part of the components of the event representations. For example, the word embeddings in the object phrase *into container or jug* are averaged to a single vector. Note that for both local and contextualised approaches, the CLIP tokenizer automatically adds a *<|startoftext|>* and an *<|endoftext|>* token to the start and the end of the input, respectively. We remove these two special tokens after the encoding, such that only the embedding of the *<|startoftext|>* as the
| text | Pour the soy or tamari sauce into a suitable small mixing container or jug. |
|------|------|
| eventrel | (<|startoftext|>, pour, sauce) |
| eventmult | (<|startoftext|>, pour, sauce, into container or jug) |

Table 1: The full sentence and the corresponding relational and multi-argument event representations for an example step.
implicit subject is averaged with other words. We evaluate the text embeddings obtained from three groups of layers of CLIP.3 The visual embeddings, in turn, are the last hidden state of the CLIP image encoder (i.e. ViT).4
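A minimal sketch of how such contextualised, layer-averaged event embeddings could be obtained with the Hugging Face CLIP text encoder is given below; the checkpoint name, the sub-token alignment heuristic and the variable names are assumptions for illustration, not the original pipeline.

```python
# Sketch: contextualised event embedding from CLIP's text encoder (Hugging Face),
# averaging the last 4 hidden layers (LAST4) and mean-pooling the sub-tokens that
# belong to the event components.
import torch
from transformers import CLIPTokenizerFast, CLIPTextModel

tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

sentence = "<|startoftext|> pour the soy or tamari sauce into a suitable small mixing container or jug."
event_words = ["pour", "sauce"]  # components of the relational event representation

enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    out = text_encoder(**enc, output_hidden_states=True)

# LAST4: average the last 4 hidden layers -> (seq_len, hidden_size)
last4 = torch.stack(out.hidden_states[-4:]).mean(dim=0).squeeze(0)

# mean-pool the sub-token embeddings of each event word, then average the words
# (the sub-token alignment below is a simplified heuristic)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
word_vecs = []
for word in event_words:
    pieces = set(tokenizer.tokenize(word))
    idx = [i for i, t in enumerate(tokens) if t in pieces]
    if idx:
        word_vecs.append(last4[idx].mean(dim=0))
event_embedding = torch.stack(word_vecs).mean(dim=0)  # contextualised event vector
```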
## 3.1.2 Triplet Network For Goal–Step Inference
We use Triplet Network (Hoffer and Ailon, 2015)
in all our experiments and use the cosine similarity as the similarity metric.
Training. The triplet network for training is implemented as a three-branch network with a text module and an image module, where the two branches of the image module share the same parameters. The input is a triplet (G+S, Ipos, Ineg),
where G+S is the embedding of the concatenated goal–step sentence, Ipos is the embedding of the positive image, Ineg is the embedding of a negative image (see Section 4.3). The model learns a cross-modal embedding space by minimizing the distance between G+S and Ipos, while maximizing the distance between G+S and Ineg. Different from Yang et al. (2021) which use G as the textual input for training, we use G+S because S share common information with I and serves as a bridge between G and I. Thus, G+S could help the model to better understand the relation between G and I.
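One common way to realise such a cosine-similarity triplet objective is sketched below in TensorFlow/Keras (the framework used for training, see Section 4.3); the margin value and the projection size are our own assumptions, as they are not specified here.

```python
# Minimal sketch of a cosine-similarity triplet objective (TensorFlow/Keras).
# The margin and projection size are illustrative assumptions.
import tensorflow as tf

def cosine_triplet_loss(anchor, positive, negative, margin=0.4):
    a = tf.math.l2_normalize(anchor, axis=-1)
    p = tf.math.l2_normalize(positive, axis=-1)
    n = tf.math.l2_normalize(negative, axis=-1)
    pos_sim = tf.reduce_sum(a * p, axis=-1)   # cos(G+S, I_pos)
    neg_sim = tf.reduce_sum(a * n, axis=-1)   # cos(G+S, I_neg)
    # push the positive image closer than the negative image by at least `margin`
    return tf.reduce_mean(tf.maximum(0.0, margin - pos_sim + neg_sim))

# text module (goal+step embedding) and a shared image module project both
# modalities into a joint space before the loss is applied
text_proj = tf.keras.layers.Dense(256)
image_proj = tf.keras.layers.Dense(256)   # shared by I_pos and I_neg branches

# usage: loss = cosine_triplet_loss(text_proj(gs), image_proj(pos), image_proj(neg))
```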
Inference. During inference, we follow the input format of Yang et al. (2021), i.e. the textual input is the goal alone. The model takes each pair (G, Ii), i ∈ {1, 2, 3, 4} from a test data point
(G, [I1, I2, I3, I4]) as input. By computing the similarity between G and Ii, the model predicts the correct step image ˆI as that with the highest simi-
| Experiment group | Embed size | #params | Input format | Event injection |
|---|---|---|---|---|
| SENTENCE | 768 (text), 1024 (image) | 3,936,256 | goal+step (train), goal (test) | s |
| EVENT | 768 (text), 1024 (image) | 3,936,256 | goal+step (train), goal (test) | e |
| SENTENCE+EVENT | 1536 (text), 1024 (image) | 4,722,688 | goal+step (train), goal (test) | s+e |

Table 2: Overview of the different inputs and the corresponding hyperparameters of the models.
larity as follows:
$$\hat{I}=\arg\max_{I_{i}} \cos(G, I_{i}) \qquad (1)$$
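In code, this selection rule amounts to a few lines over pre-computed embeddings; a small NumPy sketch (the array names are ours):

```python
# Sketch of the inference rule in Eq. (1): pick the candidate step image whose
# embedding has the highest cosine similarity to the goal embedding.
import numpy as np

def predict_step(goal_vec: np.ndarray, image_vecs: np.ndarray) -> int:
    g = goal_vec / np.linalg.norm(goal_vec)
    imgs = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    return int(np.argmax(imgs @ g))   # index i maximising cos(G, I_i)
```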
## 4 Experiments

## 4.1 Data
We conduct our experiments on **wikiHow-VGSI**
(Yang et al., 2021), a dataset for multimodal goal-oriented PKU collected from the English wikiHow. The dataset contains articles of instructions to complete tasks across a wide range of daily-life topics, including health, home and garden, education, recipes etc. Each article contains a *goal G*
in the form of a "How to"-sentence and a set of methods (e.g., "How to bake mini cupcakes", Figure 1). Each method comprises a list of *steps*. Each step has a *step headline S* which is an imperative sentence describing that step, and an image I corresponding to that step (e.g., I1 and S in Fig. 1).
To describe a goal and its steps, we use the *goal G*
and the *step headline S* and its associated image I,
respectively.
We lowercase all the texts in the dataset, and use the special token *<|startoftext|>* to represent the subject in all sentences (i.e., *pseudo-subject*).
Specifically, *<|startoftext|>* substitutes *How to* in all goals and is prepended to all step headlines. Since we found some issues in the dataset, such as duplicates or non-English text, we removed 3 goals and 56 step headlines. Details of our filtering procedure are given in Appendix 9.1. As a result, the dataset used for our experiments contains 53,186 goals, 772,221 step headlines and 772,277 step images.
## 4.2 Models
We assess the benefit of the two approaches for the event knowledge injection (relational and multiargument representations, see Sect. 3.1.1) when being used as the only representation of the goal G and step S during training (EVENT), or when being used as additional information to the full sentences
(SENTENCE+EVENT). We compare them against only using the full sentence (SENTENCE), which is also employed by Yang et al. (2021). Table 2 gives an overview of the different inputs and the corresponding hyperparameters of the models.
Jawahar et al. (2019) observed that the embeddings obtained from different layers of BERT tend to be dominated by different levels of linguistic information: surface (i.e. lexical) information in bottom layers, syntactic information in middle layers and semantic information in top layers. Thus, we examine sentence embeddings of three linguistic levels in each of these experiment groups: (1)
FIRST4 averages the outputs of the first 4 layers of CLIP's text encoder; (2) MIDDLE4 averages the outputs of the 5-th to the 8-th layers of the encoder;
(3) LAST4 averages the outputs of the last 4 layers.
## 4.2.1 EVENT
In this group of experiments, the goal and step sentences are replaced by the event representations extracted from them. For example, the sentence in Table 1 is replaced by *<|startoftext|> pour sauce* for the relational event representation and by *<|startoftext|> pour sauce into container or jug* for the multi-argument event representation.
## 4.2.2 SENTENCE+EVENT
In this group of experiments, the event representations are appended to the goal and step sentences.
For example, the aforementioned sentence is converted to *<|startoftext|> pour the soy or tamari* sauce into a suitable small mixing container or jug.
<|startoftext|> pour sauce. for the relational event representation, and *<|startoftext|> pour the soy or* tamari sauce into a suitable small mixing container or jug. <|startoftext|> pour sauce into container or jug. for the multi-argument event representation.
## 4.2.3 SENTENCE
While event representations have been found valuable in earlier, linguistically motivated research on procedural texts (see Section 2), the question remains whether they fully provide the crucial information for learning procedural knowledge. Hence, we also compare against a model that takes the encoded full sentence describing the goal or the goal+step as textual input, i.e. the model learns the task-relevant features from the full goal sentence or the step headline.
## 4.3 Training Procedure
We apply the random sampling strategy of Yang et al. (2021) to select negative step images. For each data point, we randomly select three different articles and take a random image from each article as the negative step image. We leave the experiments with other sampling methods used in Yang et al. (2021) to future work.
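A sketch of this negative-sampling strategy is given below; the `articles` mapping from article IDs to lists of step-image embeddings is an assumed data layout, not the dataset's actual format.

```python
# Sketch of the random negative-sampling strategy: for each training data point,
# pick three other articles at random and take one random step image from each.
# `articles` is an assumed layout: {article_id: [step image embeddings]}.
import random

def sample_negatives(articles: dict, current_article: str, k: int = 3):
    other_ids = [a for a in articles if a != current_article]
    negatives = []
    for art in random.sample(other_ids, k):
        negatives.append(random.choice(articles[art]))
    return negatives
```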
We initialize the weights using He-uniform with ReLU non-linearity. All models are trained for 200 epochs with batch size 1024 and a learning rate of 1e-5 with early stopping. In each experiment group, the model is trained and evaluated five times. We implemented the models in Keras with Tensorflow 2.0 and trained them on a single RTX A6000.
## 4.4 Evaluation Measures
We evaluate our models with two settings. The first one, which we call **weak**, follows the original task definition by Yang et al. (2021), where a data point in the test set is considered correctly predicted, if one step towards the goal given by that data point is correctly selected. To better fit the concept of procedural knowledge, we also apply a **strict** setting, in which a data point is correctly predicted, if all the steps required to achieve the goal given by the data point are correctly selected. We report the mean accuracy obtained by the five individual training and testing runs, as well as the corresponding standard deviation.
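The two settings can be summarised with the following sketch; grouping per-step correctness flags by goal is an assumed data layout.

```python
# Sketch of the two evaluation settings. `per_goal_correct` maps each goal to a
# list of booleans, one per required step (True = the step image was predicted
# correctly); this grouping is an assumed data layout.
def weak_and_strict_accuracy(per_goal_correct: dict):
    step_flags = [flag for flags in per_goal_correct.values() for flag in flags]
    weak = sum(step_flags) / len(step_flags)                         # per-step accuracy
    strict = sum(all(flags) for flags in per_goal_correct.values()) / len(per_goal_correct)
    return weak, strict
```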
## 5 Results
Tables 3 and 5 give the most important results. The full results can be found in Appendix 9.2.
## 5.1 Event-Based Representations
Table 3 shows the performance of the models with the *<|startoftext|>* token as pseudo-subject, using different event representations containing different levels of linguistic knowledge. The last two rows list the results of the best model and the human evaluation in Yang et al. (2021).
As expected, by comparing the EVENT*rel,*∗ and EVENT*mult,*∗ groups (i.e., <[2],[3]>, <[7],[8]>,
<[12],[13]>), we observe that the multi-argument event representation outperforms the relational event representation.
Linguistic Level Embedding. To find out which level of linguistic knowledge is most suitable for the task, we compare the following three groups of results in Table 3: <[2],[7],[12]>, <[3],[8],[13]> and <[5],[10],[15]>. On average, the LAST4 groups achieve the highest accuracy, while the FIRST4 groups perform the worst. The performance gap between FIRST4 and the other two groups is considerably larger than that between MIDDLE4 and LAST4. This indicates that both semantic and syntactic information play important roles in the task, while lexical information is far less important than syntactic and semantic information.
Event Knowledge Injection. The results of
<[3],[5]>, <[8],[10]>, and <[13],[15]> in Table 3 show that SENTENCE+EVENT results in higher accuracy than EVENT. This reveals the advantage of attaching event knowledge to the sentence over using only the event knowledge. It also implies that the sentence could provide additional information to the event, which could help models better understand procedural knowledge.
Local vs. Contextualised Embeddings. By comparing the results of local and contextualised event embeddings in Table 3, we observe a significant improvement of the performance in the latter group. On average, the accuracy with contextualised embeddings is 3.71% and 13.73% higher than that with the local ones in the **weak** setting and in the **strict** setting, respectively. This verifies the observation in the last paragraph that sentences provide additional, useful information.
| Models | Local Event (weak) | Local Event (strict) | Contextualised Event (weak) | Contextualised Event (strict) |
|---|---|---|---|---|
| [2] EVENTrel,first4 | 68.9±0.3 | 9.9±0.3 | 71.6±0.4 | 12.2±0.3 |
| [3] EVENTmult,first4 | 75.8±0.4 | 15.3±0.5 | 77.0±0.1 | 15.9±0.2 |
| [5] SENTENCE+EVENTmult,first4 | 80.9±0.8 | 19.3±1.3 | 81.0±0.1 | 19.6±0.3 |
| [7] EVENTrel,middle4 | 70.3±0.2 | 11.1±0.2 | 74.9±0 | 14.9±0.1 |
| [8] EVENTmult,middle4 | 76.9±0.6 | 16.9±0.9 | 79.9±0 | 19.1±0.4 |
| [10] SENTENCE+EVENTmult,middle4 | 82.4±0.1 | 22.1±0.3 | 82.8±0.9 | 22.4±1.5 |
| [12] EVENTrel,last4 | 69.1±0.3 | 11.5±0.4 | 75.9±0 | 16.7±0.1 |
| [13] EVENTmult,last4 | 77.3±0.4 | 18.8±0.4 | 80.8±0 | 21.2±0.2 |
| [15] SENTENCE+EVENTmult,last4 | 81.1±0.7 | 21.5±0.8 | 84.7±0 | 26.4±0.2 |
| [16] EVENTmult,last4,+1layer | 76.6±0.3 | 17.9±0.2 | 80.5±0 | 20.7±0 |
| Triplet Net (BERT) (Yang et al., 2021) † | 72.8 | - | 72.8 | - |
| Human (Yang et al., 2021) | 84.5 | - | 84.5 | - |
Table 3: Accuracy (%) of experiments using different event representations encoded by different layers of the CLIP
text encoder. The implicit subject is represented by *<|startoftext|>* (**sot+sent**). †Results adopted from the authors, they are not directly comparable.
| Implicit(/Pseudo-)Subject | weak | strict |
|-----------------------------|--------|----------|
| sot+sent | 82.7 | 22.3 |
| person+sent | 80.3 | 19.9 |
| -+sent | 79.4 | 19.4 |
| sot | 24.2 | 0.11 |
| most-frequent+sent | 79.8 | 20.3 |
| least-frequent+sent | 68.6 | 10.4 |
Implicit Subject Abstract Representation. The sentences in the dataset either begin with *How to*,
or they do not have an explicit subject. Thus, we assess the contribution of different abstract representations for the implicit subject of the sentences. Table 4 (top) shows the performance of the SENTENCE*middle*4 models with four abstract representations as the subject. The results show that *<|startoftext|>* is the most powerful abstract representation for the subject. However, we observe a significant performance degradation when using this token separately as the representation of the whole sentence (i.e. sot in Table 4). In this case, the embedding of *<|startoftext|>* is derived from the last hidden state of CLIP's text encoder. A possible reason could be that the *<|startoftext|>* token is always located between the verbal predicate and the *<|startoftext|>* token added by CLIP's tokenizer which indicates the start of the sentence. Hence, its embedding may capture syntactic information about the subject's position in the sentence from these contextual tokens via the attention mechanism. To verify this hypothesis, we conduct two groups of probing experiments for the syntactic information in the *<|startoftext|>* token. We evaluate the SENTENCE*middle*4 model by taking the most and the least frequent token in the dataset ("."
and "50.0", respectively) as a pseudo-subject of the input text, as we assume them to be generally less informative for the sentences. We observe a considerable performance drop with the least frequent token (see Table 4, bottom), indicating that
<|startoftext|> indeed gives the model valuable cues about the subject position in a sentence.
## 5.2 Event-Enhanced Sentences
Table 5 compares the performance of using sentence-only embeddings with using event-enhanced sentence embeddings. As a result, SENTENCE+EVENT outperforms SENTENCE with contextualised event embeddings when using the average of the last 4 hidden layers of the CLIP text encoder. The groups using the first 4 and middle 4 layers achieve comparable performance. Moreover, the best model (i.e., [15]) reaches the human upper
| Models | Local Event (weak) | Local Event (strict) | Contextualised Event (weak) | Contextualised Event (strict) |
|---|---|---|---|---|
| [1] SENTENCEfirst4 | 81.6±0.1 | 20.1±0.1 | 81.2±0.0 | 19.7±0.2 |
| [5] SENTENCE+EVENTmult,first4 | 80.9±0.8 | 19.3±1.3 | 81.0±0.1 | 19.6±0.3 |
| [6] SENTENCEmiddle4 | 82.7±0.4 | 22.3±0.5 | 82.7±1.1 | 22.2±1.7 |
| [10] SENTENCE+EVENTmult,middle4 | 82.4±0.1 | 22.1±0.3 | 82.8±0.9 | 22.4±1.5 |
| [11] SENTENCElast4 | 82.1±0.4 | 22.3±0.7 | 84.6±0.1 | 26.0±0.2 |
| [15] SENTENCE+EVENTmult,last4 | 81.1±0.7 | 21.5±0.8 | 84.7±0.0 | 26.4±0.2 |
| Triplet Net (BERT) (Yang et al., 2021) † | 72.8 | - | 72.8 | - |
| Human (Yang et al., 2021) | 84.5 | - | 84.5 | - |
Table 5: Accuracy (%) and standard deviation of the experiments using different event representations encoded by different layers of the CLIP text encoder.
bound, demonstrating the necessity of applying the strict evaluation setting.
## 5.3 Disentangling The Influence Of Model Sizes And Embeddings
Since the models in the SENTENCE+EVENT group have more trainable parameters due to the concatenation of sentence and event embeddings, the performance gain could be attributed either to the number of parameters or to the embeddings. To disentangle the influence of these two factors, we conduct an experiment based on EVENT*mult,last*4, with the text module of the triplet network being extended by an additional dense layer. This increases the number of trainable parameters of the model to 4,750,973, which is comparable with the most effective SENTENCE+EVENT*mult,last*4 models. The results of [16] in Table 3 show that there is no considerable change in performance from [13] and [15], indicating that the performance gain is due to attaching the event representation to the sentence.
## 6 Qualitative Analysis
We provide a qualitative analysis on the semantic gap between the ground-truth and the predicted images. Figure 2 shows part of an example of the model's predictions for the goal *How to stop* twitching in your sleep? In this example, four out of ten steps are incorrectly predicted.
For Step 5, the textual input for training is <|startoftext|> stop twitching in your sleep.
<|startoftext|> exercise every day. The model selects Image (e) which depicts a hand holding a heart. The model may associate "twitching" with the heart in the image, but fails to infer the relation between "twitching" and the jogging people in the correct image (a). Thus, the model may not learn causal relationships between the goal and the step image, such as "Jogging can improve people's health condition and thus stop twitching in the sleep".
For Step 7 with the textual input <|startoftext|>
stop twitching in your sleep. <|startoftext|> eat plenty of magnesium., the model selects Image (f) illustrating a person sitting at a laptop. Possible reasons could be: (1) The action "eat" is usually performed by humans, but the correct image only describes some food, which the model misses to associate with "eat"; and (2) The phrase "plenty of magnesium" may mislead the model to select the wrong image with a laptop, which is associated more with magnesium than vegetables. Hence, the model may only learn knowledge about simple, superficial properties of the objects in images, and may lack more complex commonsense knowledge about the relations between objects, such as
"Laptop is not edible" or "Human cannot take magnesium by eating laptops".
For Step 8, the input is <|startoftext|> stop twitching in your sleep. <|startoftext|> adjust what you consume before bed. The model selects the image showing a lady with a hat being pointed to by an arrow. This again indicates that the model's decision heavily relies on the verb. Furthermore, it also suggests that the model has limited capability of identifying the affordances of the objects in the image and associating them with the goal.
For Step 10 with the input <|startoftext|> stop twitching in your sleep. <|startoftext|> address potential vitamin deficiencies., the model again
![Figure 2 (partial): candidate step images for the goal *How to stop twitching in your sleep?*; one step headline visible in the figure reads "put on a sun hat to protect your hair and keep you cool."](8_image_0.png)
seems to not capture causal relationships such as
"Vitamin deficiency can lead to twitching in sleep",
but to base its inference on shallow object features such as "A man opens the door and wakes the sleeping woman up".
In conclusion, our observations indicate that the model's decision highly depends on shallow features in the image and their alignment to the verbs and nouns in the text, while its effectiveness is impaired by its limited understanding of deeper semantics and causal relationships between the goal and the step images.
## 7 Conclusions
In this paper, we investigate two linguistically-inspired event knowledge injection approaches for the Visual Goal–Step Inference (VGSI) task. We experimentally compare three levels of linguistic information in the text embedding produced by state-of-the-art neural deep learning models. Furthermore, we also compare event embeddings which encode only the information of the event components themselves with contextualised event embeddings which include information about the overall sentence syntactically not belonging to the arguments forming an event representation itself. Last but not least, we assess different representations for the implicit subject of instructional sentences. We find that the early, linguistically inspired methods for representing event knowledge do contribute to understanding procedures in combination with modern V&L models.
## 8 Limitations
We explore early, very simply structured event representations. Exploring recent visual–linguistic semantic representations that use richer structures comprising predicate–argument structures, event types and argument roles, as well as general graph-based approaches and scene graphs, is left for future work. Furthermore, the wikiHow articles may reflect the bias of their human authors.
## Acknowledgements
We would like to thank Professor Parisa Kordjamshidi for her valuable feedback in the Pre-Submission Mentorship Program. We are also grateful to the anonymous reviewers for their detailed comments on our work. We would further like to thank Yue Yang for meaningful discussions about the VGSI task.
## References
Niranjan Balasubramanian, Stephen Soderland, Oren Etzioni, et al. 2013. Generating coherent event schemas at scale. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1721–1731.
Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In *Proceedings of ACL-08: HLT*, pages 789–797.
Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, and Shih-Fu Chang. 2021. Joint multimedia event extraction from video and article. *arXiv preprint* arXiv:2109.12776.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *Computer Vision–ECCV*
2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX, pages 104–
120. Springer.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020.
An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint* arXiv:2010.11929.
Timothy Dozat and Christopher D Manning. 2016.
Deep biaffine attention for neural dependency parsing. *arXiv preprint arXiv:1611.01734*.
Elena Filatova and Eduard Hovy. 2001. Assigning timestamps to event-clauses. In Proceedings of the ACL
2001 Workshop on Temporal and Spatial Information Processing.
Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition: Third International Workshop, SIMBAD 2015, Copenhagen, Denmark, October 12-14, 2015. Proceedings 3, pages 84–92. Springer.
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, et al. 2021. Knowledge graphs. *ACM Computing Surveys (CSUR)*, 54(4):1–
37.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does bert learn about the structure of language? In *ACL 2019-57th Annual Meeting of the* Association for Computational Linguistics.
Graham Katz and Fabrizio Arosio. 2001. The annotation of temporal information in natural language sentences. In *Proceedings of the ACL 2001 Workshop* on Temporal and Spatial Information Processing.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
2023. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. *arXiv preprint arXiv:2301.12597*.
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, and Shih-Fu Chang. 2022. Clip-event: Connecting text and images with event structures. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16420–16429.
Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, et al. 2020. Gaia: A
fine-grained multimedia knowledge extraction system. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics: System Demonstrations, pages 77–86.
Qian Li, Jianxin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, et al. 2021. A compact survey on event extraction: Approaches and applications. arXiv preprint arXiv:2107.02126.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee.
2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32.
Qing Lyu, Li Zhang, and Chris Callison-Burch. 2021.
Goal-oriented script construction. arXiv preprint arXiv:2107.13189.
Dena Mujtaba and Nihar Mahapatra. 2019. Recent trends in natural language understanding for procedural knowledge. In 2019 International Conference on Computational Science and Computational Intelligence (CSCI), pages 420–424. IEEE.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
Karl Pichotta and Raymond Mooney. 2014. Statistical script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 220–229.
Karl Pichotta and Raymond Mooney. 2016. Learning statistical scripts with lstm recurrent neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
James Pustejovsky, Robert Ingria, Roser Sauri, José M
Castaño, Jessica Littman, Robert J Gaizauskas, Andrea Setzer, Graham Katz, and Inderjeet Mani. 2005.
The specification language timeml.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pages 8748–8763. PMLR.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 10684–10695.
Frank Schilder and Christopher Habel. 2001. From temporal expressions to temporal information: Semantic tagging of news messages. In *Proceedings* of the ACL 2001 workshop on temporal and spatial information processing.
Roger Schank and Robert Abelson. 1977. Scripts, plans, goals and understanding.

Chenkai Sun, Tie Xu, ChengXiang Zhai, and Heng Ji. 2022. Incorporating task-specific concept knowledge into script learning.
Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. *arXiv preprint arXiv:1908.07490*.
Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020.
A dataset for tracking entities in open domain procedural text. *arXiv preprint arXiv:2011.08092*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. *arXiv preprint arXiv:2010.05731*.
Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, et al. 2023. Zeroshot information extraction via chatting with chatgpt.
arXiv preprint arXiv:2302.10205.
Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, and Chris Callison-Burch. 2021. Visual goal-step inference using wikihow. *arXiv* preprint arXiv:2104.05845.
Zi Yang and Eric Nyberg. 2015. Leveraging procedural knowledge for task-oriented search. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 513–522.
Pengfei Yu, Zixuan Zhang, Clare Voss, Jonathan May, and Heng Ji. 2022. Building an event extractor with only a few examples. In *Proceedings of the Third* Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 102–109.
Li Zhang. 2022. Reasoning about procedures with natural language processing: A tutorial. *arXiv preprint* arXiv:2205.07455.
Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020a.
Intent detection with wikihow. *arXiv preprint* arXiv:2009.05781.
Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020b.
Reasoning about goals, steps, and temporal ordering with wikihow. *arXiv preprint arXiv:2009.07690*.
Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. 2022. Show me more details: Discovering hierarchies of procedures from semi-structured web data. *arXiv preprint arXiv:2203.07264*.
Yilun Zhou, Julie A Shah, and Steven Schockaert. 2019.
Learning household task knowledge from wikihow descriptions. *arXiv preprint arXiv:1909.06414*.
## 9 Appendix

## 9.1 Data Preprocessing And Cleaning
1. We remove the goals with file-ID *385799* and 5323060, as they contain non-English words.
2. Two data points share the same file-ID *411540*,
referring to the goals *How to keep healthy family relationships* and *How to keep relationships healthy within your family*, respectively. The first data point is automatically removed when building a mapping from file-IDs to goals.
3. We remove the step headlines with step-IDs 1926747_3_0, *2191502_0_0* and *985548_2_0*,
since they contain only a dot (.) and cannot be parsed by the dependency parser.
## 9.2 Full Table Of The Results
As a supplement to Table 3 and Table 5, Table 6 shows the results of all experiment groups.
| Experiments | Local Event (weak) | Local Event (strict) | Contextualised Event (weak) | Contextualised Event (strict) |
|---|---|---|---|---|
| [1] SENTENCEfirst4 | 81.6±0.1 | 20.1±0.1 | 81.2±0 | 19.7±0.2 |
| [2] EVENTrel,first4 | 68.9±0.3 | 9.9±0.3 | 71.6±0.4 | 12.2±0.3 |
| [3] EVENTmult,first4 | 75.8±0.4 | 15.3±0.5 | 77.0±0.1 | 15.9±0.2 |
| [4] SENTENCE+EVENTrel,first4 | 79.9±0.3 | 17.9±0.7 | 80.4±0.1 | 18.6±0.1 |
| [5] SENTENCE+EVENTmult,first4 | 80.9±0.8 | 19.3±1.3 | 81.0±0.1 | 19.6±0.3 |
| [6] SENTENCEmiddle4 | 82.7±0.4 | 22.3±0.5 | 82.7±1.1 | 22.2±1.7 |
| [7] EVENTrel,middle4 | 70.3±0.2 | 11.1±0.2 | 74.9±0 | 14.9±0.1 |
| [8] EVENTmult,middle4 | 76.9±0.6 | 16.9±0.9 | 79.9±0 | 19.1±0.4 |
| [9] SENTENCE+EVENTrel,middle4 | 81.8±0.3 | 21.2±0.3 | 81.8±1.1 | 20.4±1.8 |
| [10] SENTENCE+EVENTmult,middle4 | 82.4±0.1 | 22.1±0.3 | 82.8±0.9 | 22.4±1.5 |
| [11] SENTENCElast4 | 82.1±0.4 | 22.3±0.7 | 84.6±0.1 | 26.0±0.2 |
| [12] EVENTrel,last4 | 69.1±0.3 | 11.5±0.4 | 75.9±0 | 16.7±0.1 |
| [13] EVENTmult,last4 | 77.3±0.4 | 18.8±0.4 | 80.8±0 | 21.2±0.2 |
| [14] SENTENCE+EVENTrel,last4 | 80.3±0.6 | 20.2±1.0 | 84.1±0.4 | 25.2±1.1 |
| [15] SENTENCE+EVENTmult,last4 | 81.1±0.7 | 21.5±0.8 | 84.7±0 | 26.4±0.2 |
| [16] EVENTmult,last4,+1layer | 76.6±0.3 | 17.9±0.2 | 80.5±0 | 20.7±0 |
| Triplet Net (BERT) (Yang et al., 2021) † | 72.8 | - | 72.8 | - |
| Human (Yang et al., 2021) | 84.5 | - | 84.5 | - |
Table 6: Accuracy (%) of experiments using different event representations encoded by different layers of the CLIP
text encoder (full table). |
schoch-etal-2023-data | Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values | https://aclanthology.org/2023.acl-srw.37 | Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapley-based data valuation to fine-tuning large pre-trained language models. To address this, we propose TS-DShapley, an algorithm that reduces computational cost of Shapley-based data valuation through: 1) an efficient sampling-based method that aggregates Shapley values computed from subsets for valuation of the entire training set, and 2) a value transfer method that leverages value information extracted from a simple classifier trained using representations from the target language model. Our experiments applying TS-DShapley to select data for fine-tuning BERT-based language models on benchmark natural language understanding (NLU) datasets show that TS-DShapley outperforms existing data selection methods. Further, TS-DShapley can filter fine-tuning data to increase language model performance compared to training with the full fine-tuning dataset. | # Data Selection For Fine-Tuning Large Language Models Using Transferred Shapley Values
Stephanie Schoch Ritwick Mishra Yangfeng Ji Department of Computer Science University of Virginia Charlottesville, VA 22904
{sns2gr,mbc7bu,yangfeng}@virginia.edu
## Abstract
Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapleybased data valuation to fine-tuning large pretrained language models. To address this, we propose TS-DSHAPLEY, an algorithm that reduces computational cost of Shapley-based data valuation through: 1) an efficient samplingbased method that aggregates Shapley values computed from subsets for valuation of the entire training set, and 2) a value transfer method that leverages value information extracted from a simple classifier trained using representations from the target language model. Our experiments applying TS-DSHAPLEY to select data for fine-tuning BERT-based language models on benchmark natural language understanding (NLU) datasets show that TS-DSHAPLEY
outperforms existing data selection methods.
Further, TS-DSHAPLEY can filter fine-tuning data to increase language model performance compared to training with the full fine-tuning dataset.
## 1 Introduction
Large language models (LMs) have achieved stateof-the-art performance on many natural language processing (NLP) tasks (Radford et al., 2019; Brown et al., 2020; Sanh et al., 2022). To adapt these models to new datasets and tasks, the standard approach is to fine-tune a pre-trained LM on a targeted downstream task. This allows the pre-trained general linguistic knowledge to be leveraged while fine-tuning to learn the task-specific information.
However, during fine-tuning, pre-trained LMs are prone to significant performance degradation in the presence of noisy data (Srivastava et al., 2020).
This effect may be further amplified when noisy or otherwise harmful instances are highly influential to the model parameters (Koh and Liang, 2017).
As a result, it is important to identify harmful instances in the fine-tuning data that may obfuscate the task information and degrade performance.
To automatically identify harmful data, prior works have used training dynamics (Swayamdipta et al., 2020) and estimation of marginal contributions via leave-one-out retraining (Cook, 1977) or influence functions (Koh and Liang, 2017). Shapley values, which satisfy certain desirable fairness guarantees, have also recently been adopted from cooperative game theory to measure datum contributions, where a data point's Shapley value is the average marginal contribution to every possible data subset (Ghorbani and Zou, 2019).
In practice, Shapley-based data values are approximated using various techniques (Ghorbani and Zou, 2019; Jia et al., 2019b, 2021; Kwon and Zou, 2022; Schoch et al., 2022), as exact Shapley value computation over a dataset would require *exhaustively retraining the model* for every datum on every possible subset (i.e. exponential complexity with respect to the number of data points). However, many of the existing approximation methods still exhibit a computational bottleneck when considering datasets and models at scale (e.g. datasets larger than 5K instances). This, in turn, directly limits the application of Shapley-based data valuation to state-of-the-art LMs and many NLP datasets.
To address the challenges posed by 1)
the *model constraint* (the model retraining requirement) and 2) the *dataset constraint* (the time-complexity/dataset size relation), we propose Transferred Sampling Data Shapley (TS-DSHAPLEY), an algorithm that utilizes two novel components that directly address each constraint.
Specifically, to address the model constraint, we propose to compute Shapley-based data values using a simple, linear model that is trained on the learned representation from the target LM. Additionally, to address the dataset constraint, we propose a sampling-based method that computes Shapley values on data subsets and aggregates them for valuation of the entire training set.
Our contributions are as follows: 1) we propose a sampling-based data Shapley computation method and demonstrate its efficacy empirically using as little as 2% of the original training data; 2) we propose the use of a simple linear classifier with a target model's pre-trained representation and demonstrate empirically the performance gains achieved over alternate pre-trained embeddings; and 3) we show the efficacy of Shapley-based data valuation and selection methods on benchmark NLU tasks using fine-tuned large LMs. (Code is available at https://github.com/stephanieschoch/ts-dshapley.)
## 2 Related Work
While Shapley values are often applied in a post hoc manner following model training (Ghorbani and Zou, 2019; Kwon and Zou, 2022; Jia et al.,
2019a,b, 2021; Schoch et al., 2022), the demonstrated efficacy makes it a natural extension to apply such methods for data selection *prior to* training. To this end, Shapley values have been used for evaluating data for transfer learning (Parvez and Chang, 2021) and in active learning (Ghorbani et al., 2021).
Further, although Shapley-based data values have primarily been considered model-specific, in practice, a subset of training instances that may harm performance may be mislabeled (Koh and Liang, 2017; Swayamdipta et al., 2020; Ghorbani and Zou, 2019) or exhibit spelling mistakes or grammatical errors (Sun et al., 2020; Srivastava et al., 2020), which should be intrinsic to the dataset. Prior works have demonstrated the transferability of Shapley-based data values across various classifier architectures (Schoch et al., 2022) and have demonstrated the efficacy of surrogate KNN classifiers using pre-trained embeddings (Jia et al., 2021). Notably, our work differs in that we utilize the pre-trained embeddings extracted from the target LM and avoid the k-nearest neighbor assumption that training data far from a test datum do not contribute to its prediction (Jia et al., 2019a).
## 3 Method
Let $D = \{(x_i, y_i)\}_{i=1}^{n}$ denote a training set containing $n$ training instances. For each training instance $i$, the Shapley value $\phi_i$ is defined as the average marginal contribution of $i$ to every possible subset $S \subseteq D$ that contains this instance (Ghorbani and Zou, 2019):

$$\phi_i = \sum_{S \subseteq D;\, i \in S} \frac{1}{\binom{n-1}{|S \setminus \{i\}|}} \left\{ v_{A}(S) - v_{A}(S \setminus \{i\}) \right\}$$

where $v_{A}(S)$ is a value function, typically defined as the development accuracy of model $A$ trained on $S$. The challenge of calculating $\phi_i$ is two-fold:
the exponential complexity of all possible subsets S ⊆ D and the computational cost of training A
on each S and S\{i}. While Shapley-based data values are approximated in practice, most existing approximation methods are not efficient enough for large scale learning problems.
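For intuition, the definition can be evaluated exactly only for very small $n$, which is what motivates the approximations discussed next; the toy sketch below uses a synthetic value function in place of development accuracy.

```python
# Toy illustration of the Shapley formula above: tractable only for tiny n,
# which is why approximations are needed. The value function here is synthetic
# and stands in for the development accuracy of a trained model.
from itertools import combinations
from math import comb

def exact_shapley(points, value):
    n = len(points)
    phi = {i: 0.0 for i in points}
    for i in points:
        rest = [p for p in points if p != i]
        for r in range(n):                         # r = |S \ {i}|
            for S_minus_i in combinations(rest, r):
                S = set(S_minus_i) | {i}
                phi[i] += (value(S) - value(set(S_minus_i))) / comb(n - 1, r)
    return phi

# synthetic value: one "noisy" point (index 2) hurts the score
value = lambda S: len(S - {2}) - 0.5 * (2 in S)
print(exact_shapley([0, 1, 2], value))   # the noisy point receives a negative value
```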
## 3.1 TS-DSHAPLEY
Let Atgt be the target classifier (i.e. large LM)
that we want to fine-tune on a subset of D. To reduce computational cost, we propose to (1) use a linear classifier Asrc as the proxy of Atgt for data valuation; (2) use multi-chain Monte Carlo sampling to compute Shapley values on different subsets of D. For faithful data valuation, we further propose to train Asrc on the data representations extracted from Atgt.
Representation Extraction. We extract the representations from the penultimate layer of the pretrained LM Atgt as the inputs for training Asrc.
Note that training Asrc in this way is equivalent to fixing the LM and only fine-tuning the last classification layer. To further remove the redundancy in the representations and reduce computational cost, we follow prior work by performing PCA on the collection of representations and selecting the first 32 principal components (Ghorbani and Zou, 2019; Kwon and Zou, 2022; Schoch et al., 2022).
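A sketch of this extraction step, assuming a Hugging Face encoder for the target LM, is shown below; using the first-token vector of the final encoder layer as the representation fed to PCA is a simplification on our part, and the batching helper is illustrative.

```python
# Sketch: extract fixed representations from the target LM and reduce them with
# PCA to 32 components before training the simple source classifier. The use of
# the first-token (<s>/[CLS]) vector of the final encoder layer approximates
# "the penultimate layer", i.e. the input to the classification head.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base").eval()

def embed(texts, batch_size=32):
    feats = []
    for i in range(0, len(texts), batch_size):
        batch = tok(texts[i:i + batch_size], padding=True, truncation=True,
                    return_tensors="pt")
        with torch.no_grad():
            out = enc(**batch)
        feats.append(out.last_hidden_state[:, 0, :])   # first-token vector
    return torch.cat(feats).numpy()

# X = embed(train_sentences); X32 = PCA(n_components=32).fit_transform(X)
```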
Sampling Data Shapley. Instead of directly estimating Shapley-based data values via Monte Carlo sampling on the whole training set, our approach performs Monte Carlo sampling on subsets of the data, which we refer to as *sampling chains*. Within a single sampling chain $c$, we sample a subset of training instances $S_t$, estimate their contributions, and repeat $T$ times. The contribution of each instance in $S_t$ is calculated by removing one instance at a time in a random order. For example, the contribution of the first randomly removed instance $i$ is $c_{S_t}(i) = v_{A_{src}}(S_t) - v_{A_{src}}(S_t \setminus \{i\})$, the contribution of the second randomly removed instance $k$ is $c_{S_t}(k) = v_{A_{src}}(S_t \setminus \{i\}) - v_{A_{src}}(S_t \setminus \{i, k\})$, and so on. On the other hand, if an instance $i$ is not in $S_t$, $c_{S_t}(i) = 0$. After $T$ times, the Shapley value of instance $i$ is approximated as $\phi_i \approx \frac{1}{T} \sum_{S_t} c_{S_t}(i)$. To balance the computational efficiency and approximation, we empirically define a range of the size $|S_t| \in [\frac{s}{2}, s]$, with subset size $s$ as the sampling upper bound.
Computation can be further sped up with multiple Monte Carlo sampling chains $S_t^{(c)}$, $c \in \{1, \ldots, J\}$. The corresponding value approximation is defined as

$$\phi_i = \frac{1}{J} \sum_{c} \frac{1}{T} \sum_{S_t^{(c)}} c_{S_t^{(c)}}(i)$$

As each chain can be computed independently, the efficiency can be boosted with parallel computing. This novel idea of multi-chain sampling serves as the core of TS-DSHAPLEY and significantly speeds up computation, in practice working with a simple model $A_{src}$.
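A possible implementation of the multi-chain estimate is sketched below; the linear SVM follows the description of the simple source classifier, while the subset sizes, chain and iteration counts, the empty-subset value of zero and all names are illustrative assumptions rather than the released code.

```python
# Sketch of the multi-chain sampling estimate. The value function is the
# development accuracy of a linear SVM (the simple source classifier); X, y are
# NumPy arrays of PCA-reduced features and labels. Defaults are illustrative.
import numpy as np
from sklearn.svm import LinearSVC

def dev_accuracy(idx, X, y, X_dev, y_dev):
    if len(idx) == 0 or len(set(y[idx])) < 2:     # cannot fit on <2 classes
        return 0.0                                 # assumed baseline value
    clf = LinearSVC().fit(X[idx], y[idx])
    return clf.score(X_dev, y_dev)

def one_chain(X, y, X_dev, y_dev, s, T, seed):
    rng = np.random.default_rng(seed)
    n = len(y)
    contrib = np.zeros(n)
    for _ in range(T):
        size = rng.integers(s // 2, s + 1)         # |S_t| in [s/2, s]
        subset = list(rng.choice(n, size=size, replace=False))
        prev = dev_accuracy(subset, X, y, X_dev, y_dev)
        for i in list(subset):                     # random removal order
            subset.remove(i)
            cur = dev_accuracy(subset, X, y, X_dev, y_dev)
            contrib[i] += prev - cur               # marginal contribution of i
            prev = cur
    return contrib / T

def ts_dshapley(X, y, X_dev, y_dev, s, T=10, n_chains=5):
    chains = [one_chain(X, y, X_dev, y_dev, s, T, seed=c) for c in range(n_chains)]
    return np.mean(chains, axis=0)                 # average over chains
```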
Data Selection with TS-DSHAPLEY Values. To identify harmful data points, we use the data removal strategy of Ghorbani and Zou (2019) on Asrc and transfer the selection outcome to the target model Atgt. Specifically, we gradually remove training instances from the lowest estimated contribution value to the highest estimated contribution value. Following each removal, we retrain Asrc and evaluate predictive performance on the held-out development data. As a result, this removal procedure will identify an optimal subset Sopt that gives the best predictive performance on Asrc. With the assumption of data value transferability (Schoch et al., 2022), we expect that Atgt trained on Sopt will give no worse, and likely better, performance than Atgt trained on D. While this data removal strategy is proposed in prior work (Ghorbani and Zou, 2019), the data selection use case is novel in NLP.
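The removal-based selection can be sketched as follows, reusing the hypothetical `dev_accuracy` helper from the previous sketch; the removal step size is an assumption for illustration.

```python
# Sketch of the data selection step: remove training points in ascending order of
# their estimated value, re-score the source classifier after each removal batch,
# and keep the subset that maximises development accuracy.
import numpy as np

def select_subset(values, X, y, X_dev, y_dev, dev_accuracy, step=100):
    order = np.argsort(values)                      # lowest-value points first
    keep = list(range(len(values)))
    best_keep, best_acc = list(keep), dev_accuracy(keep, X, y, X_dev, y_dev)
    for start in range(0, len(order), step):
        removed = set(order[: start + step])
        keep = [i for i in range(len(values)) if i not in removed]
        if not keep:
            break
        acc = dev_accuracy(keep, X, y, X_dev, y_dev)
        if acc > best_acc:
            best_keep, best_acc = keep, acc
    return best_keep     # indices used to fine-tune the target LM
```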
## 4 Experiments

## 4.1 Experiment Setup
Pre-trained Large Language Models. We utilize two transformer-based large LMs for which traditional Shapley-based data value computation would be intractable: RoBERTa-base (Liu et al.,
2019, 125M parameters) and DistilBERT (Sanh et al., 2019, 66M parameters).
Datasets. We select one GLUE benchmark
(Wang et al., 2019) dataset from each task category: SST-2 (Socher et al., 2013), QQP (Iyer et al.,
2017), and RTE (Dagan et al., 2006), representing Single-Sentence Tasks, Similarity and Paraphrase Tasks, and Inference Tasks, respectively. Additional dataset details are reported in Appendix A.
Notably, we select datasets of varied sizes to reflect diverse sampling subset to training set size ratios.
Data Selection Baselines. We compare against performance when training on the full data subset as well as three selection baselines: leave-one-out
(LOO) (Cook, 1977), KNN-shapley (KNN) (Jia et al., 2019a, 2021), and random sampling. For LOO, we use the same classifier architecture as with TS-DSHAPLEY to compute value estimates.
For both LOO and KNN, we reduce the dataset using the data removal procedure defined in section 3.
Finally, for random sampling, we remove a random sample of data points equal to the number of points removed via TS-DSHAPLEY.
## 4.2 Data Selection Experiment
To test the efficacy of using TS-DSHAPLEY to select data for fine-tuning large LMs, we compute data values using each method and perform the data removal procedure described in section 3. Specifically, we remove the lowest value data points preceding the data removal step that achieved the highest development accuracy using Asrc. For TS-DSHAPLEY, we vary the subset size and number of chains based on dataset size, using subset size = 6.7k (10%), 7.28k (2%), 374 (15%) and number of chains = 25, 10, 25 for SST-2, QQP, and RTE,
| Method Category | Method | RoBERTa SST-2 | RoBERTa QQP | RoBERTa RTE | DistilBERT SST-2 | DistilBERT QQP | DistilBERT RTE |
|---|---|---|---|---|---|---|---|
| Full Training Set | Liu et al. (2019) | 0.948 | 0.919 | 0.787 | - | - | - |
| Full Training Set | Sanh et al. (2019) | - | - | - | 0.913 | 0.885 | 0.599 |
| Full Training Set | Full Dataset | 0.950 | 0.917 | 0.788 | 0.908 | 0.905 | 0.618 |
| Data Selection Baselines | Leave-One-Out | 0.947 | - | 0.784 | 0.912 | - | 0.614 |
| Data Selection Baselines | KNN Shapley | 0.946 | 0.916 | 0.781 | 0.911 | 0.905 | 0.622 |
| Data Selection Baselines | Random | 0.947 | 0.917 | 0.684 | 0.911 | 0.905 | 0.589 |
| Our Method | TS-DSHAPLEY | 0.953 | 0.919 | 0.801 | 0.915 | 0.907 | 0.652 |

Table 1: Predictive accuracy of RoBERTa and DistilBERT on SST-2, QQP, and RTE for each data selection method.
respectively. Additional training and hyperparameter details, including details of a limited hyperparameter sweep, can be found in Appendix A.
Results. Results are shown in Table 1. TS-DSHAPLEY consistently outperforms baseline selection methods as well as performance using the full fine-tuning dataset. Notably, data selection using TS-DSHAPLEY resulted in performance improvements of up to 1.3% and 3.4% for RoBERTa and DistilBERT, respectively, over the predictive performance when training using the full fine-tuning dataset. These results indicate TS-DSHAPLEY successfully identifies data points that harm model performance. As an additional analysis, for the RTE dataset we show the location of harmful points identified by TS-DSHAPLEY
on a data map (Swayamdipta et al., 2020) in Appendix B.
## 4.3 Sampling Hyperparameter Analysis
TS-DSHAPLEY exhibited good performance for data selection across various subset sizes and numbers of chains. For example, on QQP TS-DSHAPLEY outperformed the full dataset and baseline methods when using a subset of just 2% of the training set. To better understand the impact of different parameter values, we utilize a parameter value grid on the RTE dataset and re-compute TS-DSHAPLEY. Specifically, using the best hyperparameters from subsection 4.2 (see Appendix A),
we evaluate performance of RoBERTa and DistilBERT using a parameter sweep of subset size as a percentage of the total training set size, subset size ∈ {1, 2, 5, 10, 15}%, and number of chains
∈ {2, 5, 10, 15} and report the Pearson's correlation between each parameter and performance.
Results. All correlations are reported in Appendix B and summarized here. When subset
| Model | Embeddings | SST-2 | QQP | RTE |
|---|---|---|---|---|
| RoBERTa | RoBERTa | 0.953 | 0.919 | 0.801 |
| RoBERTa | DistilBERT | 0.951 | 0.906 | 0.762 |
| RoBERTa | GloVe | 0.948 | 0.908 | 0.767 |
| DistilBERT | DistilBERT | 0.915 | 0.907 | 0.652 |
| DistilBERT | RoBERTa | 0.906 | 0.903 | 0.623 |
| DistilBERT | GloVe | 0.909 | 0.903 | 0.632 |
Table 2: Predictive accuracy using TS-DSHAPLEY
with different word embeddings.
size > 2%, both models demonstrate a high positive correlation between number of chains and performance. For example, when using 15% of the training data, RoBERTa on RTE had a correlation of 0.94. Across the different number of chains, however, there was no consistent pattern of correlation between subset size and performance. This indicates that increasing number of chains (which can be computed in-parallel) may be of more benefit compared to increasing sampling subset size.
## 4.4 Effect Of Different Embeddings
To test the efficacy of computing TS-DSHAPLEY
using the extracted representations from the target LM, we perform an experiment where we use the removal indices computed with 1) the representation from a different language model (e.g. removing indices for fine-tuning RoBERTa using the optimal removal index identified using DistilBERT data representations), and 2) GloVe pre-trained word embeddings (Pennington et al., 2014), as a third-party representation repository.
Results. As shown in Table 2, while alternate embeddings can still lead to improvements over the full data, using the representation from the target LM is beneficial and consistently outperforms other embeddings. The results suggest that low value data is likely a combination of (i) inherently noisy data (e.g. mislabeled instances) and (ii) instances that are harmful to specific models due to different model architectures and pre-training strategies.
## 5 Conclusion
In this work, we propose TS-DSHAPLEY to address the model and dataset constraints that currently contribute to a computational bottleneck when computing Shapley-based data value estimates.
## Limitations
While we demonstrate the efficacy of TS-DSHAPLEY empirically, the current work is limited in terms of theoretical analysis. For example, while we have good empirical performance with a linear SVM, additional analysis could determine if there are optimal ways to select an alternative simple model architecture for the source classifier depending on the target classifier or dataset. Additionally, while we found a strong correlation between number of sampling chains and performance when the subset size was > 2% of the training data size, the lower subset size threshold to observe this correlation may be dataset dependent, which additional analysis could address.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
R Dennis Cook. 1977. Detection of influential observation in linear regression. *Technometrics*, 19(1):15–
18.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177–190.
Springer.
Amirata Ghorbani and James Zou. 2019. Data shapley: Equitable valuation of data for machine learning.
In *International Conference on Machine Learning*,
pages 2242–2251. PMLR.
Amirata Ghorbani, James Zou, and Andre Esteva. 2021.
Data shapley valuation for efficient batch active learning. *arXiv preprint arXiv:2104.08312*.
Shankar Iyer, Nikhil Dandekar, and Kornel Csernai.
2017. First quora dataset release: Question pairs.
Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li, Ce Zhang, Costas Spanos, and Dawn Song. 2019a. Efficient task-specific data valuation for nearest neighbor algorithms. *Proceedings of the VLDB Endowment*,
12(11):1610–1623.
Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve Gürel, Bo Li, Ce Zhang, Dawn Song, and Costas J Spanos. 2019b. Towards efficient data valuation based on the shapley value. In *The 22nd International Conference on Artificial* Intelligence and Statistics, pages 1167–1176. PMLR.
Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, and Dawn Song. 2021. Scalability vs. utility: Do we have to sacrifice one for the other in data importance quantification?
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 8239–
8247.
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR.
Yongchan Kwon and James Zou. 2022. Beta shapley: a unified and noise-reduced data valuation framework for machine learning. *Proceedings of the 25th International Conference on Artificial Intelligence and* Statistics (AISTATS) 2022.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Md Rizwan Parvez and Kai-Wei Chang. 2021. Evaluating the values of sources in transfer learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5084–5116.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja,
et al. 2022. Multitask prompted training enables zeroshot task generalization. In The Tenth International Conference on Learning Representations.
Stephanie Schoch, Haifeng Xu, and Yangfeng Ji. 2022.
Cs-shapley: Class-wise shapley values for data valuation in classification. In *Advances in Neural Information Processing Systems*.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642.
Ankit Srivastava, Piyush Makhija, and Anuj Gupta.
2020. Noisy text data: Achilles' heel of bert. In Proceedings of the Sixth Workshop on Noisy Usergenerated Text (W-NUT 2020), pages 16–21.
Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, and Caiming Xiong. 2020.
Adv-bert: Bert is not robust on misspellings! generating nature adversarial samples on bert. *arXiv* preprint arXiv:2003.04985.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9275–9293.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In the Proceedings of ICLR.
## A Additional Experiment Details
In this section, we include additional experiment setup details.
## A.1 Datasets
Dataset statistics are provided in Table 3, with further description provided below.
SST-2: Stanford Sentiment Treebank (Socher et al., 2013) is a collection of English movie reviews with human annotations of their sentiment.
The model is tasked with predicting a review's sentiment as positive or negative.
QQP: Quora Question Pairs (Iyer et al., 2017)
is a collection of English question pairs from the website Quora where the task is to determine if a pair of questions are similar in meaning.
RTE: Recognizing Textual Entailment (Dagan et al., 2006) combines several English datasets from annual textual entailment challenges, where the task is to predict if the *text* entails the *hypothesis* or not.
## A.2 Hyperparameters
For each experiment, we consider a limited hyperparameter sweep for each model, selection method, and task, with batch size ∈ {16, 32} and learning rate ∈ {10^-5, 3 × 10^-5}. The rest of the hyperparameters are kept consistent across experiment conditions. We report the mean development set accuracy from five random initializations for which we fine-tune for 10 epochs and select the model checkpoint with the highest development set accuracy. Results from each hyperparameter sweep are reported in Table 4 and Table 5.
## B Additional Results

## B.1 Additional Data Selection Analysis
While we compare directly with baseline selection methods that directly measure estimated data contribution, we perform an additional analysis by comparing the indices removed with TS-DSHAPLEY with the mapped training dynamics using data maps (Swayamdipta et al., 2020). Specifically, we first plot the data map for RoBERTa trained on RTE using the same hyperparameters as in subsection 4.2. Then, we plot the same data map showing only the data points that were identified by TS-DSHAPLEY to be harmful, i.e. removed from the fine-tuning training data. These are shown in Figure 2 and Figure 3, respectively.
We observe that a handful of instances in the hard-to-learn region (identified by Swayamdipta et al. (2020) to contain some mislabeled examples) were removed, as well as a small number of instances in the ambiguous region. Interestingly though, we observe that 1) most of the data points in RTE belonged to the easy-to-learn region, and 2) a cluster of easy-to-learn points were removed. Swayamdipta et al. (2020) found that too many easy-to-learn instances could decrease both in-distribution and out-of-distribution performance and noted that determining how to select an optimal balance of easy-to-learn and ambiguous examples, particularly in low data settings, was an open problem. As TS-DSHAPLEY achieved a performance gain over the full dataset performance, these results suggest that TS-DSHAPLEY may be effective to potentially determine an optimal balance and address this problem. We leave further analysis of this to future work.
## B.2 Sampling Hyperparameter Analysis.
Pearson's correlation coefficients for the sampling parameter analysis in section 4 are reported in Table 6 and Table 7, where each result represents the mean of five sampling and chain computation trials.
| Dataset | GLUE Task Category | Task | Metric | Train | Dev |
|---|---|---|---|---|---|
| SST-2 | Single Sentence Tasks | Sentiment | Acc. | 67k | 1.8k |
| QQP | Similarity and Paraphrase Tasks | Paraphrase | Acc./F1 | 364k | 40.4k |
| RTE | Inference Tasks | NLI | Acc. | 2.5k | 277 |
Table 3: Statistics for each dataset. We use the train and development data splits as GLUE tasks have held out test set labels.
| Model | Method | SST-2 (BS, LR) | QQP (BS, LR) | RTE (BS, LR) |
|---|---|---|---|---|
| RoBERTa | Full Dataset | 16, 10^-5 | 32, 3 × 10^-5 | 16, 3 × 10^-5 |
| RoBERTa | Leave-One-Out | 32, 10^-5 | - | 16, 3 × 10^-5 |
| RoBERTa | KNN Shapley | 16, 10^-5 | 32, 3 × 10^-5 | 16, 3 × 10^-5 |
| RoBERTa | Random | 32, 3 × 10^-5 | 32, 3 × 10^-5 | 16, 3 × 10^-5 |
| RoBERTa | TS-DSHAPLEY | 32, 10^-5 | 32, 3 × 10^-5 | 16, 3 × 10^-5 |
| DistilBERT | Full Dataset | 16, 10^-5 | 32, 3 × 10^-5 | 32, 3 × 10^-5 |
| DistilBERT | Leave-One-Out | 32, 10^-5 | - | 16, 10^-5 |
| DistilBERT | KNN Shapley | 16, 10^-5 | 32, 3 × 10^-5 | 16, 10^-5 |
| DistilBERT | Random | 32, 3 × 10^-5 | 16, 3 × 10^-5 | 16, 3 × 10^-5 |
| DistilBERT | TS-DSHAPLEY | 16, 3 × 10^-5 | 16, 10^-5 | 16, 3 × 10^-5 |

Table 4: Batch size (BS) and learning rate (LR) for the data selection experiment based on the hyperparameter sweep defined in section 4.

| Model | Embeddings | SST-2 (BS, LR) | QQP (BS, LR) | RTE (BS, LR) |
|---|---|---|---|---|
| RoBERTa | RoBERTa | 32, 10^-5 | 32, 3 × 10^-5 | 16, 3 × 10^-5 |
| RoBERTa | DistilBERT | 16, 10^-5 | 32, 10^-5 | 16, 3 × 10^-5 |
| RoBERTa | GloVe | 16, 3 × 10^-5 | 32, 3 × 10^-5 | 32, 3 × 10^-5 |
| DistilBERT | DistilBERT | 16, 10^-5 | 16, 10^-5 | 16, 3 × 10^-5 |
| DistilBERT | RoBERTa | 32, 10^-5 | 32, 10^-5 | 32, 10^-5 |
| DistilBERT | GloVe | 32, 10^-5 | 32, 3 × 10^-5 | 32, 3 × 10^-5 |

Table 5: Batch size (BS) and learning rate (LR) for the embeddings switch experiment based on the hyperparameter sweep defined in section 4.
| Model | 1 (25) | 2 (50) | 5 (125) | 10 (249) | 15 (374) |
|---|---|---|---|---|---|
| RoBERTa | 0.119 | 0.013 | 0.892 | 0.929 | 0.942 |
| DistilBERT | 0.240 | 0.104 | 0.613 | 0.776 | 0.714 |

Table 6: Correlations between number of chains and performance for each subset size (%, #) on the RTE dataset.

| Model | 2 | 5 | 10 | 15 | 20 | 25 |
|---|---|---|---|---|---|---|
| RoBERTa | -0.463 | 0.127 | -0.474 | 0.013 | 0.472 | 0.763 |
| DistilBERT | 0.027 | -0.034 | 0.530 | 0.447 | 0.737 | 0.692 |

Table 7: Correlations between subset size and performance for each number of sampling chains on the RTE dataset.
![8_image_0.png](8_image_0.png)

Figure 2: Data map for RoBERTa trained on RTE.

![9_image_0.png](9_image_0.png)

Figure 3: Data map showing only the points identified by TS-DSHAPLEY as harmful and removed from the fine-tuning data.
|
yoshimi-etal-2023-distractor | Distractor Generation for Fill-in-the-Blank Exercises by Question Type | https://aclanthology.org/2023.acl-srw.38 | This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation. | # Distractor Generation For Fill-In-The-Blank Exercises By Question Type
Nana Yoshimi1, Tomoyuki Kajiwara1, Satoru Uchida2, Yuki Arase3**, Takashi Ninomiya**1 1Ehime University, 2Kyushu University, 3Osaka University
{yoshimi@ai., kajiwara@, ninomiya@}cs.ehime-u.ac.jp [email protected], [email protected]
## Abstract
This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.

Jeff didn't accept the job offer because of the ____ salary. (a) low (b) weak (c) cheap (d) inexpensive
## 1 Introduction
Fill-in-the-blank questions, also known as cloze tests (Taylor, 1953), are one way to assess learners' English proficiency and are widely used in examinations such as TOEIC1and in school education. As shown in Figure 1, the question format generally consists of a four-choice option with one correct answer and three distractors. These require substantial costs because they are manually created by question writers with extensive language teaching experience. This study automatically generates distractors to reduce workload.
Most of the previous studies on the automatic generation of cloze tests (Mitkov and Ha, 2003; Sumita et al., 2005; Zesch and Melamud, 2014; Jiang and Lee, 2017; Susanti et al., 2018; Panda et al., 2022) have generated words that are semantically similar to the correct words as distractors.
Other methods have been proposed, such as those based on co-occurrence with words in the carrier sentence (Liu et al., 2005; Hill and Simha, 2016),
considering the whole context (Yeung et al., 2019),
and considering the learner's error tendencies (Sakaguchi et al., 2013). However, these previous studies apply the same method to all questions, which leads to bias in the characteristics of the generated distractors. Actual entrance examinations have multiple question types reflecting the purpose of the questions, such as grammatical knowledge and idiomatic expressions. Existing methods have difficulty in flexibly changing the characteristics of distractors for each question type.

1 https://www.ets.org/toeic.html

Figure 1: Example of English fill-in-the-blank question. (National Center Test for University Admissions, 2018)2
It was certainly _ crowded than I thought it would be.
(a) less (b) little (c) least (d) fewer ((a) is correct)
In this study, we first manually classify English fill-in-the-blank questions in the entrance examinations for Japanese universities2 by an expert.
Next, we propose a method for automatic distractor generation according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.
## 2 Related Work
Previous studies have generated distractors in the following three steps: (1) candidate generation,
(2) reranking, and (3) filtering.
Jiang and Lee (2017) utilized cosine similarity with word embeddings (Mikolov et al., 2013) to identify candidate words that are semantically similar to the correct word. These candidate words were ranked by similarity and filtered by word 3gram. That is, if a 3-gram containing a candidate word appears in Wikipedia, that candidate is excluded. It filters out expressions that are actually used in a large-scale corpus to exclude appropriate examples from the distractor candidates.
2 https://jcshop.jp/SHOP/18149/list.html

| Carrier sentence | Correct | Distractors | Type |
|---|---|---|---|
| I hear that one of his three sisters __ four movies a week. | sees | seeing, seen, see | grammar |
| My mother was surprised __ the news that I passed the test. | at | to, for, in | function word |
| When you exercise, you should wear __ and loose clothing. | comfortable | delicate, serious, flat | context |

Table 1: Examples of question types. From top to bottom, the sources2 are (Toyo University, 2018), (Meijo University, 2017), (Nakamura Gakuen University, 2018).

Yeung et al. (2019) reranked the candidates generated from word embeddings by the mask-filling probability with BERT (Devlin et al., 2019). They also utilize BERT for filtering, eliminating candidates with too high and too low probabilities. Panda et al. (2022) proposed candidate generation based on round-trip machine translation. That is, the carrier sentence was first translated into a pivot language and back-translated into English.
Then, word alignment was used to obtain a candidate for the correct word and its corresponding word. These candidates were reranked using word embeddings and filtered by WordNet (Miller, 1995).
Specifically, synonyms of the correct word in WordNet and words with a different part of speech from the correct word were excluded from the candidates.
These existing methods have been evaluated in different ways on different datasets, making it difficult to compare their performance. We have comprehensively evaluated them and propose further improvements on top of their combinations.
## 3 Definition Of Question Types
An experienced English teacher specializing in English education has categorized the question types for English fill-in-the-blank questions. The analysis covers 500 randomly selected questions from the entrance examinations for Japanese universities in the five-year period from 2017 to 2021. As shown in Table 1, the following three question types were defined:
- **Grammar**: Questions that mainly use the conjugated form of the same word as choices.
- **Function word**: Questions that are choices from a prescribed list of function words.
- **Context**: Questions with choices determined by context or idiomatic expressions.
Table 2 shows the number of occurrences for each question type. Approximately half of the questions were on context, 40% were on function word, and 10% were on grammar. In the next section, we propose how to generate distractors according to the characteristics of each question type.

| Question type | Number of questions |
|---|---|
| Grammar | 66 (13.2%) |
| Function word | 195 (39.0%) |
| Context | 239 (47.8%) |

Table 2: Statistics of question types.
## 4 Generating Distractors
Following previous studies (Jiang and Lee, 2017; Yeung et al., 2019; Panda et al., 2022), we also generate distractors through three steps. For candidate generation and reranking, we selected combinations of the existing methods described in Section 2 that maximize performance on the validation dataset3for each question type. For filtering, we propose methods according to the characteristics of each question type, which are described below.
## 4.1 Filtering For Questions On Grammar
For questions on grammar, the conjugated forms of the correct word should be obtained as candidates.
Therefore, we apply POS filtering. That is, we exclude candidates that have the same part of speech or the same conjugation as the correct word.
Furthermore, to avoid unreliable distractors that could be the correct answer, we exclude candidates with a high mask-filling probability by BERT (Devlin et al., 2019). Unlike Yeung et al. (2019), called BERT (static), which used two fixed thresholds to select the top θH to θL, our filter, called BERT
(dynamic), dynamically changes the thresholds.
Specifically, we exclude candidates that have a higher probability than the correct word. The example of the first sentence in Table 1 shows that "thinks" is eliminated as a candidate for the same part of speech, and "watches" is eliminated as a high probability candidate.

3 For the validation dataset, 500 questions were randomly selected in addition to the evaluation dataset annotated in Section 3. These questions were automatically annotated with question types by BERT (Devlin et al., 2019). The accuracy of BERT was 84.8% in the 10-fold cross-validation.

| Type | Method | Candidate | Reranking | Filtering | k = 3 | k = 5 | k = 10 | k = 20 |
|---|---|---|---|---|---|---|---|---|
| Grammar | Jiang-2017 | fastText | fastText | Word 3-gram | 24.7 | 21.6 | **17.7** | **11.2** |
| Grammar | Yeung-2019 | fastText | BERT | BERT (static) | 1.5 | 1.9 | 3.0 | 3.4 |
| Grammar | Panda-2022 | Round-trip | fastText | WordNet | 8.6 | 8.3 | 5.6 | 3.6 |
| Grammar | Ours | fastText | fastText | POS+BERT (dynamic) | **27.8** | **25.0** | 17.0 | 10.4 |
| Function word | Jiang-2017 | fastText | fastText | Word 3-gram | 10.3 | 12.1 | 11.8 | 9.3 |
| Function word | Yeung-2019 | fastText | BERT | BERT (static) | 6.3 | 7.1 | 7.3 | 5.7 |
| Function word | Panda-2022 | Round-trip | fastText | WordNet | 15.9 | 16.7 | 13.1 | 7.8 |
| Function word | Ours | Round-trip | BERT | List of function words | **19.1** | **22.2** | **21.1** | **13.2** |
| Context | Jiang-2017 | fastText | fastText | Word 3-gram | 2.2 | 2.9 | 3.7 | 3.2 |
| Context | Yeung-2019 | fastText | BERT | BERT (static) | 1.8 | 2.0 | 2.3 | 2.7 |
| Context | Panda-2022 | Round-trip | fastText | WordNet | 4.2 | 5.1 | 4.6 | 3.2 |
| Context | Ours | Round-trip | fastText | BERT (dynamic) | 3.8 | **5.3** | **5.8** | **4.4** |

Table 3: Results of automatic evaluation of generated distractors by F1-score.
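A simplified sketch of the Section 4.1 filters is shown below. Single-word POS tagging with NLTK and the HuggingFace fill-mask pipeline are simplifying assumptions; the example candidates are based on the grammar question in Table 1.

```python
# Sketch of the grammar-question filters: drop candidates with the same POS tag as the
# correct word, and drop candidates that BERT rates as more probable than the correct word.
import nltk
from transformers import pipeline

nltk.download("averaged_perceptron_tagger", quiet=True)  # resource name may vary by NLTK version
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

carrier = "I hear that one of his three sisters [MASK] four movies a week."
correct = "sees"
candidates = ["seeing", "seen", "see", "thinks", "watches"]

def pos_of(word):
    # Tagging a word in isolation is a simplification of proper in-context tagging.
    return nltk.pos_tag([word])[0][1]

outputs = fill_mask(carrier, targets=[correct] + candidates)
scores = {o["token_str"]: o["score"] for o in outputs}

kept = [
    c for c in candidates
    if pos_of(c) != pos_of(correct)                 # POS filter
    and scores[correct] >= scores.get(c, 0.0)       # dynamic BERT filter
]
print(kept)  # e.g. conjugated forms such as "seeing", "seen", "see"
```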
## 4.2 Filtering For Questions On Function Word
For questions on function words, only function words such as prepositions and conjunctions are basically used as choices. Therefore, we utilize the list of function words4for entrance examinations for Japanese universities to exclude candidates not included in this list. The example of the second sentence in Table 1 shows that "time" and "taken" are eliminated.
## 4.3 Filtering For Questions On Context
Since the questions on context are designed to test knowledge of collocations or idioms, candidates should be obtained for words that often co-occur with surrounding words in the carrier sentence.
However, as with questions on grammar, to avoid unreliable distractors, candidates with a high mask-filling probability by BERT are excluded. The example of the third sentence in Table 1 shows that
"comfy" and "cosy" are eliminated.
## 5 Experiments
We evaluate the method of distractor generation on the 500 questions constructed in Section 3.
## 5.1 Setting
Implementation Details For candidate generation, we implemented methods based on word embeddings (Jiang and Lee, 2017) and round-trip machine translation (Panda et al., 2022). We utilized fastText (Bojanowski et al., 2017) as word embeddings and Transformer (Vaswani et al., 2017), trained on English-German language pairs5 (Ng et al., 2019; Ott et al., 2019) according to the previous study (Panda et al., 2022), as machine translators. For word alignment, we used Hungarian matching (Kuhn, 1955) based on word embeddings (Song and Roth, 2015).

4 https://ja.wikibooks.org/wiki/大学受験英語_英単語/機能語・機能型単語一覧
For reranking, we implemented methods based on word embeddings (Jiang and Lee, 2017) and BERT (Yeung et al., 2019). We utilized BERTbase-uncased (Devlin et al., 2019) via HuggingFace Transformers (Wolf et al., 2020). Note that the candidate words are restricted to the intersection of the vocabulary of fastText and BERT.
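The sketch below illustrates two of these building blocks, embedding-based candidate generation followed by mask-filling reranking; the fastText vector file path, neighbour count, and example sentence are illustrative assumptions rather than the exact configuration.

```python
# Sketch: generate candidates near the correct word in embedding space, then rerank
# them by BERT's mask-filling probability in the carrier sentence.
from gensim.models import KeyedVectors
from transformers import pipeline

vectors = KeyedVectors.load_word2vec_format("crawl-300d-2M.vec")  # hypothetical local fastText file
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def generate_candidates(correct_word, top_n=100):
    return [word for word, _ in vectors.most_similar(correct_word, topn=top_n)]

def rerank_by_bert(carrier_with_mask, candidates):
    outputs = fill_mask(carrier_with_mask, targets=candidates)
    return [o["token_str"] for o in sorted(outputs, key=lambda o: o["score"], reverse=True)]

candidates = generate_candidates("comfortable")
ranked = rerank_by_bert("When you exercise, you should wear [MASK] and loose clothing.", candidates)
print(ranked[:10])
```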
For filtering, NLTK (Bird and Loper, 2004) was used for pos tagging. We used 166 function words.4 Comparative Methods We compared the proposed method with three existing methods described in Section 2: methods based on word embeddings (Jiang and Lee, 2017), masked language models (Yeung et al., 2019), and round-trip machine translations (Panda et al., 2022). For word 3-gram filtering, we used preprocessed English Wikipedia (Guo et al., 2020). For BERT (static) filtering, we used thresholds of θH = 11 and θL = 39 following Yeung et al. (2019).
Automatic Evaluation To evaluate whether the generated distractors are matched with the actual entrance examinations, an automatic evaluation is performed. We generated 100 words of candidates for each method and compared the top
k ∈ {3, 5, 10, 20} words, after reranking and filtering, to the three gold distractors. Note that if there are fewer than k candidates, the remainder were randomly selected from the vocabulary. We employed the F1-score as the evaluation metric.

Carrier sentence: There are three people __ school events. (Question type: Grammar; Correct answer: discussing; Distractors: discuss, discussed, discusses)

| Method | Generated candidates |
|---|---|
| (Jiang and Lee, 2017) | debating, talking, discussion, commenting, mentioning, **discuss**, examining |
| (Yeung et al., 2019) | creating, talking, considering, promoting, deciding, initiating, exploring |
| (Panda et al., 2022) | talking, dealing, speaking, working, reporting, giving, wednesday |
| Proposed Method | discussion, **discuss**, **discussed**, discussions, **discusses**, about, conversation |

Carrier sentence: They are a little worried __ their daughter's trip to the Amazon. (Question type: Function word; Correct answer: about; Distractors: for, with, from)

| Method | Generated candidates |
|---|---|
| (Jiang and Lee, 2017) | concerning, regarding, relating, talking, what, telling, pertaining |
| (Yeung et al., 2019) | considering, up, the, seeing, than, just, discussing |
| (Panda et al., 2022) | the, any, and, afraid, affected, anxious, at |
| Proposed Method | by, after, **for**, at, **from**, **with**, of |

Table 4: Examples of generated distractors. The example in the upper row is from (Ritsumeikan University, 2019),2 and the example in the lower row is from (Morinomiya University of Medical Sciences, 2018).2 Candidates matching the gold distractors are highlighted in bold.
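A minimal sketch of the F1 computation used in the automatic evaluation is shown below, using the grammar example from Table 4; per-question averaging is omitted.

```python
# F1 between the top-k generated candidates and the three gold distractors.
def f1_at_k(candidates, gold, k):
    top_k = candidates[:k]
    hits = len(set(top_k) & set(gold))
    if hits == 0:
        return 0.0
    precision = hits / len(top_k)
    recall = hits / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = ["discuss", "discussed", "discusses"]
generated = ["discussion", "discuss", "discussed", "discussions", "discusses"]
print(f1_at_k(generated, gold, k=5))  # 0.75
```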
Manual Evaluation To assess the correlation of examinee performance between the generated questions and the actual entrance examinations, a manual evaluation is performed. First, distractors are generated for each of the 60 randomly selected questions in each of the proposed and two comparative methods (Jiang and Lee, 2017; Panda et al.,
2022). Next, ten university students, who are native Japanese speakers, took 100 English fill-in-the-blank questions from the actual entrance examinations, as well as these 180 generated questions.
Note that these questions are sampled evenly by question type, with no duplication. Finally, we calculated the correlation of accuracy between the generated and actual questions.
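The correlation step can be sketched as follows; the per-examinee accuracy values are placeholders rather than the study's data.

```python
# Correlate each examinee's accuracy on generated questions with their accuracy
# on actual examination questions (placeholder values for ten examinees).
from scipy.stats import pearsonr, spearmanr, kendalltau

acc_actual = [0.82, 0.74, 0.68, 0.90, 0.77, 0.61, 0.85, 0.72, 0.79, 0.66]
acc_generated = [0.80, 0.70, 0.65, 0.88, 0.75, 0.64, 0.83, 0.69, 0.81, 0.63]

print("Pearson :", pearsonr(acc_actual, acc_generated)[0])
print("Spearman:", spearmanr(acc_actual, acc_generated)[0])
print("Kendall :", kendalltau(acc_actual, acc_generated)[0])
```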
## 5.2 Results
Automatic Evaluation Table 3 shows the results of the automatic evaluation. The top three rows show the performance of the comparison method and the bottom row shows the performance of the proposed method for each question type. The proposed method achieved the best performance in 9 out of 12 settings and the second best performance in the remaining 3 settings. This implies the effectiveness of filtering according to the characteristics of question types. The improvement in performance was particularly noticeable for questions on function words, with greater improvement as the number of candidates k increased.
| Method | Pearson | Spearman | Kendall |
|---|---|---|---|
| (Jiang and Lee, 2017) | 0.739 | 0.723 | 0.584 |
| (Panda et al., 2022) | 0.776 | 0.774 | 0.614 |
| Proposed Method | 0.903 | 0.802 | 0.629 |

Table 5: Correlation of accuracy between actual entrance examinations and generated questions.
Manual Evaluation Table 5 shows the results of the manual evaluation. The proposed method has the highest correlation with the performance of the actual entrance examinations for all correlation coefficients. This means that the proposed method is most effective in identifying the English proficiency of examinees.
Output Examples Table 4 shows examples of generated distractors. In questions on grammar, existing methods without consideration of question types generate candidates that are semantically close to the correct word, but the proposed method correctly generates conjugated forms of the correct word. In questions on function words, the existing methods include candidates other than function words, but the proposed method generates only function words, correctly ranking the gold distractors higher. In questions on context, as shown in Table 3, the proposed method is not much different from the existing methods up to the top five candidates, but good candidates may still follow beyond that.
## 6 Conclusion
To reduce the cost of creating English fill-inthe-blank questions in entrance examinations for Japanese universities, this study addressed automatic distractor generation. First, we identified three question types and constructed a fill-in-theblank corpus annotated by an expert with those question types. Next, we proposed methods to generate distractors that take into account the characteristics of each question type, focusing on candidate filtering. Experimental results based on automatic and manual evaluations demonstrate the effectiveness of the proposed method. Specifically, our method is able to generate candidates that match the gold distractors better than existing methods and has the highest correlation with the examinees' English proficiency as assessed in actual entrance examinations. For future work, we plan to expand the corpus size by estimating question types, to generate distractors by supervised learning.
## Acknowledgements
We thank anonymous reviewers for valuable comments and suggestions. This work was supported by JSPS KAKENHI Grant Number JP21H03564 and JP22H00677.
## References
Steven Bird and Edward Loper. 2004. NLTK: The Natural Language Toolkit. In Proceedings of the ACL
Interactive Poster and Demonstration Sessions, pages 214–217.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. *Transactions of the Association for Computational Linguistics*, 5:135–146.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186.
Mandy Guo, Zihang Dai, Denny Vrandeciˇ c, and Rami ´
Al-Rfou. 2020. Wiki-40B: Multilingual Language Model dataset. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 2440–2452.
Jennifer Hill and Rahul Simha. 2016. Automatic Generation of Context-Based Fill-in-the-Blank Exercises Using Co-occurrence Likelihoods and Google ngrams. In *Proceedings of the 11th Workshop on* Innovative Use of NLP for Building Educational Applications, pages 23–30.
Shu Jiang and John Lee. 2017. Distractor Generation for Chinese Fill-in-the-blank Items. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 143–148.
Harold W. Kuhn. 1955. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly, 2(1-2):83–97.
Chao-Lin Liu, Chun-Hung Wang, Zhao-Ming Gao, and Shang-Ming Huang. 2005. Applications of Lexical Information for Algorithmically Composing Multiple-Choice Cloze Items. In Proceedings of the Second Workshop on Building Educational Applications Using NLP, pages 1–8.
Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the 1st International Conference on Learning Representations.
George A. Miller. 1995. WordNet: A Lexical Database for English. *Communications of the ACM*, 38(11):39–
41.
Ruslan Mitkov and Le An Ha. 2003. Computer-Aided Generation of Multiple-Choice Tests. In *Proceedings of the HLT-NAACL 03 Workshop on Building* Educational Applications Using Natural Language Processing, page 17–22.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 News Translation Task Submission. In *Proceedings of the Fourth Conference on Machine* Translation, pages 314–319.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53.
Subhadarshi Panda, Frank Palma Gomez, Michael Flor, and Alla Rozovskaya. 2022. Automatic Generation of Distractors for Fill-in-the-Blank Exercises with Round-Trip Neural Machine Translation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research* Workshop, pages 391–401.
Keisuke Sakaguchi, Yuki Arase, and Mamoru Komachi.
2013. Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners. In *Proceedings of the 51st Annual Meeting of the Association* for Computational Linguistics, pages 238–242.
Yangqiu Song and Dan Roth. 2015. Unsupervised Sparse Vector Densification for Short Text Similarity.
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1275–1280.
Eiichiro Sumita, Fumiaki Sugaya, and Seiichi Yamamoto. 2005. Measuring Non-native Speakers' Proficiency of English by Using a Test with
Automatically-Generated Fill-in-the-Blank Questions. In Proceedings of the Second Workshop on Building Educational Applications Using NLP, pages 61–68.
Yunik Susanti, Takenobu Tokunaga, Hitoshi Nishikawa, and Hiroyuki Obari. 2018. Automatic Distractor Generation for Multiple-choice English Vocabulary Questions. *Research and Practice in Technology* Enhanced Learning, 13(15):1–16.
Wilson L Taylor. 1953. "Cloze Procedure": A New Tool for Measuring Readability. *Journalism quarterly*, 30(42):415–433.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers:
State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
Chak Yan Yeung, John Lee, and Benjamin Tsou. 2019.
Difficulty-aware Distractor Generation for Gap-Fill Items. In *Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association*, pages 159–164.
Torsten Zesch and Oren Melamud. 2014. Automatic Generation of Challenging Distractors Using ContextSensitive Inference Rules. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 143–148. |
simmons-2023-moral | Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity | https://aclanthology.org/2023.acl-srw.40 | Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This work investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed moral mimicry. This work explores this hypothesis in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, this work shows that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use. | # Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored To Political Identity
## Gabriel Simmons
UC Davis [email protected]
## Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in generating fluent text, as well as tendencies to reproduce undesirable social biases. This study investigates whether LLMs reproduce the moral biases associated with political groups in the United States, an instance of a broader capability herein termed *moral mimicry*. This hypothesis is explored in the GPT-3/3.5 and OPT families of Transformer-based LLMs. Using tools from Moral Foundations Theory, it is shown that these LLMs are indeed moral mimics. When prompted with a liberal or conservative political identity, the models generate text reflecting corresponding moral biases. This study also explores the relationship between moral mimicry and model size, and similarity between human and LLM moral word use.
## 1 Introduction
Recent work suggests that Large Language Model
(LLM) performance will continue to scale with model and training data sizes (Kaplan et al., 2020).
As LLMs advance in capability, it becomes more likely that they will be capable of producing text that influences human opinions (Tiku, 2022), potentially lowering barriers to disinformation (Weidinger et al., 2022). More optimistically, LLMs may play a role in bridging divides between social groups (Alshomary and Wachsmuth, 2021; Jiang et al., 2022). For better or worse, we should understand how LLM-generated content will impact the human informational environment - whether this content is influential, and to whom.
Morality is an important factor in persuasiveness and polarization of human opinions (Luttrell et al.,
2019). Moral argumentation can modulate willingness to compromise (Kodapanakkal et al., 2022),
and moral congruence between participants in a dialogue influences argument effectiveness (Feinberg and Willer, 2015) and perceptions of ethicality
(Egorov et al., 2020).
Therefore, it is important to characterize the capabilities of LLMs to produce apparently-moral content1. This requires a framework from which we can study morality; Moral Foundations Theory (MFT) is one such framework. MFT proposes that human morals rely on five foundations:
Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation2.
Evidence from MFT supports the "Moral Foundations Hypothesis" that political groups in the United States vary in their foundation use - liberals rely primarily on the individualizing foundations (Care/Harm and Fairness/Cheating), while conservatives make more balanced appeals to all 5 foundations, appealing to the binding foundations
(Authority/Subversion, Sanctity/Degradation, and Loyalty/Betrayal) more than liberals (Graham et al.,
2009; Dogruyol et al. ˘ , 2019; Frimer, 2020).
Existing work has investigated the moral foundational biases of language models that have been fine-tuned on supervised data (Fraser et al., 2022),
investigated whether language models reproduce other social biases (see (Weidinger et al., 2022)
section 2.1.1), and probed LLMs for differences in other cultural values (Arora et al., 2023). Concurrent work has shown that LLMs used as dialog agents tend to repeat users' political views back to them, and that this happens more frequently in larger models (Perez et al., 2022). To my knowledge, no work yet examines whether language models can perform *moral mimicry* - that is, reproduce the moral foundational biases associated with social 1Anthropomorphization provides convenient ways to talk about system behavior, but can also distort perception of underlying mechanisms (Bender and Koller, 2020). To be clear, I ascribe capabilities such as "moral argumentation" or "moral congruence" to language models only to the extent that their outputs may be perceived as such, and make no claim that LLMs might generate such text with communicative intent.
2Liberty/Oppression was proposed as a sixth foundation
- for the sake of this analysis I consider only the original 5 foundations, as these are the ones available in the Moral Foundations Dictionaries (Graham et al., 2009; Frimer, 2019; Hopp et al., 2021).
282
![1_image_0.png](1_image_0.png)
groups such as political identities.
The present study considers whether LLMs use moral vocabulary in ways that are situationallyappropriate, and how this compares to human foundation use. I find that LLMs respond to the salient moral attributes of scenario descriptions, increasing their use of the appropriate foundations, but still differ from human consensus foundation use more than individual humans (Section 2.1). I then turn to the moral mimicry phenomenon. I investigate whether conditioning an LLM with a political "identity" influences the model's use of moral foundations in ways that are consistent with human moral biases. I find confirmatory results for text generated based on"liberal" and "conservative" political identities (Section 2.2). Finally, I ask how the moral mimicry phenomenon varies with model size.
Results show that the extent to which LLMs can reproduce moral biases increases with model size, in the OPT family (Section 2.2). This is also true for the GPT-3 and -3.5 models considered together, and to a lesser extent for the GPT-3 models alone.
## 2 Methods
Data Generation All experiments follow the same pattern for data generation, described in the following sections and illustrated in Figure 1. Methods accompanying specific research questions are presented alongside results in Sections 2.1 - 2.3.
Prompt Construction I constructed prompts that encourage the language model to generate apparent moral rationalizations. Each prompt conditions the model with three variables: a scenario s, a political identity phrase i, and a moral stance r. Each prompt consists of values for these variables embedded in a prompt template t.
Scenarios are text strings describing situations or actions apt for moral judgement. I used three datasets (Moral Stories3(Emelin et al., 2021),
ETHICS4(Hendrycks et al., 2021), and Social Chemistry 1015(Forbes et al., 2020)) to obtain four sets of scenarios, which I refer to as Moral Stories, ETHICS, Social Chemistry Actions, and Social Chemistry Situations. Appendix Section A.2 provides specifics on how each dataset was constructed. I use S and s to a set of scenarios, and a single scenario, respectively.
Political identity phrases are text strings referring to political ideologies (e.g. "liberal"). I use I
and i to refer to a set of political identities and an individual identity, respectively.
Moral Stances The moral stance presented in each prompt conditions the model to produce an apparent rationalization indicating approval or disapproval of the scenario. I use *R, r* to refer to the set of stances {moral, immoral}, and a single stance, respectively. The datasets used herein contain labels indicating the normative moral acceptability of each scenario. For a scenario s, I refer to its normative moral acceptability as rH(s).
Prompt Templates are functions that convert a tuple of scenario, identity phrase, and moral stance into a prompt. To check for sensitivity to any particular phrasing, five different styles of prompt template were used (see Appendix Tables 2 and 3).
3 Downloaded from https://github.com/demelin/moral_stories
4 Downloaded from https://github.com/hendrycks/ethics
5 Downloaded from https://github.com/mbforbes/socialchemistry-101
Prompts were constructed by selecting a template t for a particular style, and populating it with a stance, scenario, and political identity phrase.
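The construction can be sketched as below; the template wording is a hypothetical stand-in, since the five actual template styles (Appendix Tables 2 and 3) are not reproduced here.

```python
# Sketch: build one prompt per (scenario, identity, stance) combination.
from itertools import product

template = (  # hypothetical wording, not one of the paper's templates
    "As a {identity}, I believe this is {stance}: {scenario}\n"
    "My reasoning is that"
)

scenarios = ["Mike skipped his friend's party to help a stranded driver."]
identities = ["liberal", "conservative"]
stances = ["moral", "immoral"]

prompts = [
    template.format(identity=i, stance=r, scenario=s)
    for s, i, r in product(scenarios, identities, stances)
]
print(len(prompts))  # one prompt per combination
```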
Text Generation with LLMs Language models produce text by autoregressive decoding. Given a sequence of tokens, the model assigns likelihoods to all tokens in its vocabulary indicating how likely they are to follow the sequence. Based on these likelihoods, a suitable next token is appended to the sequence, and the process is repeated until a maximum number of tokens is generated, or the model generates a special "end-of-sequence" token. I refer to the text provided initially to the model as a "prompt" and the text obtained through the decoding process as a "completion". In this work I used three families of Large Language Models: GPT-3, GPT-3.5, and OPT (Table 1). GPT-3 is a family of Transformer-based (Vaswani et al., 2017) autoregressive language models with sizes up to 175 billion parameters, pre-trained in self-supervised fashion on web text corpora (Radford et al., 2019).
The largest 3 of the 4 GPT-3 models evaluated here also received supervised fine-tuning on highquality model samples and human demonstrations
(OpenAI, 2022). The GPT-3.5 models are also Transformer-based, pre-trained on text and code web corpora, and fine-tuned using either supervised fine-tuning or reinforcement learning from human preferences (OpenAI, 2022). I accessed GPT-3/3.5 through the OpenAI Completions API (OpenAI,
2021). I used the engine parameter to indicate a specific model. GPT-3 models "text-ada-001", "textbabbage-001", "text-curie-001", and "text-davinci001", and GPT-3.5 models "text-davinci-002" and
"text-davinci-003" were used. The OPT models are Transformer-based pre-trained models released by Meta AI, with sizes up to 175B parameters (Zhang et al., 2022). Model sizes up to 30B parameters were used herein. OPT model weights were obtained from the HuggingFace Model Hub. I obtained completions from these models locally using the HuggingFace Transformers (Wolf et al., 2020)
and DeepSpeed ZeRo-Inference libraries (DeepSpeed, 2022), using a machine with a Threadripper 3960x CPU and two RTX3090 24GB GPUs. For all models, completions were produced with temperature=0 for reproducibility. The max_tokens parameter was used to stop generation after 64 tokens (roughly 50 words). All other settings were
## Measuring Moral Content
Moral Foundations Dictionaries I estimated the moral foundational content of each completion using three dictionaries: the Moral Foundations Dictionary version 1.0 (MFDv1) (Graham et al.,
2009), Moral Foundations Dictionary version 2.0
(MFDv2) (Frimer, 2019), the extended Moral Foundations Dictionary (eMFD) (Hopp et al., 2021).
MFDv1 consists of a lexicon containing 324 word stems, with each word stem associated to one or more categories. MFDv2 consists of a lexicon of 2014 words, with each word associated to a single category. In MFDv1, the categories consist of a
"Vice" and "Virtue" category for each of the five foundations, plus a "MoralityGeneral" category, for 11 categories in total. MFDv2 includes all categories from MFDv1 except "MoralityGeneral", for a total of 10 categories. The eMFD (Hopp et al.,
2021) contains 3270 words and differs slightly from MFDv1 and MFDv2. Words in the eMFD are associated with all foundations by scores in [0, 1].
Scores were derived from annotation of news articles, and indicate how frequently each word was associated to each foundation, divided by the total word appearances. Word overlap between the dictionaries is shown in Appendix Figure 5.
Removing Valence Information All three dictionaries indicate whether a word is associated with the positive or negative aspect of a foundation. In MFDv1 and MFDv2 this is indicated by word association to the "Vice" or "Virtue" category for each foundation. In the eMFD, each word has sentiment scores for each foundation. In this work I
was interested in the foundational contents of the completions, independent of valence. Accordingly,
"Vice" and "Virtue" categories were merged into a single category for each foundation, in both MFDv1 and MFDv2. The "MoralityGeneral" score from MFDv1 was unused as it does not indicate association with any particular foundation. Sentiment scores from eMFD were also unused.
Applying the Dictionaries Applying dictionary d to a piece of text produces five scores {wdf | f ∈
F}. For MFDv1 and MFDv2, these are integer values representing the number of foundationassociated words in the text. The eMFD produces continuous values in [0, ∞] - the foundation-wise sums of scores for all eMFD words in the text.
I am interested in the probability P that a human or language model (apparently) expresses foundation f, which I write as Ph(ef ) and PLM (ef ), respectively. I use P
d(ef |*s, r, i*) to denote this probability conditioned on a scenario s, stance r, and political identity i, using a dictionary d for measurement.
I use F to refer to the set of moral foundations, and f for a single foundation. I use D to refer to the set of dictionaries. In each dictionary, Wd refers to all words in the dictionary. For MFDv1 and MFDv2, Wdf refers to all the words in d belonging to foundation f. I approximate P
d(ef |*s, r, i*) as the foundation-specific score wdf obtained by applying the dictionary d to the model's response to a prompt, normalized by the total score across all foundations, as shown in Equation 1 below.
$$P^{d}(e_{f}|s,r,i)\approx{\frac{w_{f d}}{\sum_{f^{\prime}\in F}w_{f^{\prime}d}}}$$
$\text{Since}\;\text{Effort\_cirings}$
$$(1)$$
Calculating Effect Sizes Effect sizes capture how varying political identity alters the likelihood that the model will express foundation f, given the same stance and scenario. Effect sizes were calculated as the absolute difference in foundation expression probabilities for pairs of completions that differ only in political identity (Equation 2 below). Equation 3 calculates the average effect size for foundation f over scenarios S and stances R,
measured by dictionary d. Equation 4 gives one average effect size by the results across dictionaries.
$$\begin{array}{c}{{\Delta P_{i_{1},i_{2}}^{d}(e_{f}|s,r){=}P^{d}(e_{f}|s,i_{1},r){-}P^{d}(e_{f}|s,i_{2},r)}}\\ {{\ }}\\ {{\Delta P_{i_{1},i_{2}}^{d}(e_{f}){=}E_{s,r\in S\times R}\,\Delta P_{i_{1},i_{2}}^{d}(e_{f}|s,r)}}\\ {{\ }}\\ {{\ }}\\ {{\Delta P_{i_{1},i_{2}}(e_{f}){=}E_{d\in D}\,\Delta P_{i_{1},i_{2}}^{d}(e_{f})}}\end{array}$$
## 2.1 LLM vs. Human Moral Foundation Use
Experiment Details This experiment considers whether LLMs use foundation words that are situationally appropriate7. LLMs would satisfy a weak criterion for this capability if they were more likely to express foundation f in response to scenarios where foundation f is salient, compared to their average use of f across a corpus of scenarios containing all foundations in equal proportion. I formalize this with Criterion A below.
Criterion A Average use of foundation f is greater across scenarios Sf that demonstrate only foundation f, in comparison to average use of foundation f across a foundationally-balanced corpus of scenarios S (Equation 5).
$$E_{s_{f},r\in S_{f}\times R}\,P_{LM}(e_{f}\mid s_{f},r)>E_{s,r\in S\times R}\,P_{LM}(e_{f}\mid s,r)$$

7 e.g. using the Care/Harm foundation when prompted with a violent scenario
A stronger criterion would require LLMs not to deviate from human foundation use beyond some level of variation that is expected among humans. I formalize this with Criterion B below.
Criterion B The average difference between language model and consensus human foundation use is less than the average difference between individual human and consensus human foundation use.
$$\mathrm{DIFF}_{LM,CH}\leq\mathrm{DIFF}_{H,CH}\tag{5}$$
$$\mathrm{DIFF}_{LM,CH}=E_{s\in S}\left[\,\left|P_{LM}(e_{f}\mid s,r_{H}(s))-C_{H}(s)\right|\,\right]\tag{6}$$
$$\mathrm{DIFF}_{H,CH}=E_{s\in S}\left[\,E_{H}\left[\,\left|P_{h}(e_{f}\mid s)-C_{H}(s)\right|\,\right]\right]\tag{7}$$
$$C_{H}(s)=E_{h}\left[P_{h}(e_{f}\mid s)\right]\tag{8}$$
Stance rH(s) is the normative moral acceptability of scenario s - the human-written rationalizations are "conditioned" on human normative stance for each scenario, so I only compare these with model outputs that are also conditioned on human normative stance.
Criterion A requires a corpus with ground-truth knowledge that only a particular foundation f is salient for each scenario. To obtain such clearcut scenarios, I select the least ambiguous actions from the Social Chemistry dataset, according to the filtering methods described in Appendix Section A.2.3. Estimating human consensus foundation use (Criterion B) requires a corpus of scenarios that are each annotated in open-ended fashion by multiple humans. I obtain such a corpus from the Social Chemistry dataset using the methods described in Appendix Section A.2.4.
## Results
Figure 2 (left) shows average values of P(ef |s)
for each foundation. For all five foundations, the model increases its apparent use of foundationassociated words appropriate to the ground truth foundation label, satisfying Criterion A. Figure 2 (right) shows LM differences from human consensus |PLM (ef |*s, r*Hs) − CH(s)| obtained from the text-davinci-002 model, and human differences from human consensus EH [|Ph(ef |s) − CH(s)|],
on the Social Chemistry Situations dataset. In general the LM-human differences are greater than the human-human differences.
![4_image_0.png](4_image_0.png)
## 2.2 Are LLMs Moral Mimics?
Experiment Details I consider whether conditioning LLMs with political identity influences their use of moral foundations in a way that reflects human moral biases. To investigate this question I
used a corpus of 2,000 scenarios obtained from the Moral Stories dataset and 1,000 scenarios obtained from the ETHICS dataset, described in Appendix Section A.2.
Prompts were constructed with template style 2 from table 2. For each scenario, four prompts were constructed based on combinations of "liberal" and "conservative" political identity and moral and immoral stance, for a total of 12,000 prompts.
Completions were obtained from the most capable model in each family that our computational resources afforded: text-davinci-001 (GPT-3), textdavinci-002 and text-davinci-003 (GPT-3.5) and OPT-30B. One generation was obtained from each model for each prompt. I calculated average effect size ∆Pi1,i2
(ef ) with i1 = "liberal" and i2 = "conservative" for all five foundations. Effect sizes were computed separately for each dictionary, for a total of 18,000 effect sizes computed per model.
Results Figure 3 shows effect sizes for liberal vs. conservative political identity, for the most capable models tested from the OPT, GPT, and GPT-3.5 model families, measured using the three moral foundations dictionaries. The shaded regions in each plot represent the effects that would be expected based on the Moral Foundations Hypothesis
- namely that prompting with liberal political identity would result in more use of the individualizing foundations (positive ∆Pi1,i2
) and prompting with conservative political identity would result in more use of the binding foundations (negative ∆Pi1,i2
).
The majority of effect sizes coincide with the Moral Foundations Hypothesis. Of 60 combinations of 5 foundations, 4 models, and 3 dictionaries, only 11 effect sizes are in the opposite direction from expected, and all of these effect sizes have magnitude of less than 1 point absolute difference.
## 2.3 Is Moral Mimicry Affected By Model Size?
Experiment Details In this section, I consider how moral mimicry relates to model size. I used text-ada-001, text-babbage-001, text-curie-001, and text-davinci-001 models from the GPT-3 family, text-davinci-002 and text-davinci-003 from the GPT-3.5 family (OpenAI, 2022), and OPT-350m, OPT-1.3B, OPT-6.7B, OPT-13B, and OPT-30B
(Zhang et al., 2022). The GPT-3 models have estimated parameter counts of 350M, 1.3B, 6.7B
and 175B, respectively (OpenAI, 2022; Gao, 2021).
Text-davinci-002 and text-davinci-003 also have 175B parameters (OpenAI, 2022). Parameters in billions for the OPT models are indicated in the model names.
To analyze to what extent each model demonstrates the moral mimicry phenomenon, I define a scoring function MFH-SCORE that scores a model m as follows:
$$\mathrm{sign}_{MFH}=\begin{cases}-1,&\text{if }f\in\{\mathrm{A/S},\,\mathrm{S/D},\,\mathrm{L/B}\}\\+1,&\text{if }f\in\{\mathrm{C/H},\,\mathrm{F/C}\}\end{cases}\tag{10}$$
(A/S: Authority/Subversion; S/D: Sanctity/Degradation; L/B: Loyalty/Betrayal; C/H: Care/Harm; F/C: Fairness/Cheating.)

The MFH-SCORE calculates the average effect size for each model in the direction predicted by the Moral Foundations Hypothesis.
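The scoring can be sketched as follows; because the MFH-SCORE formula itself is not reproduced above, the simple average over foundations is an assumption, with only the sign convention taken from Equation 10.

```python
# Sketch of an MFH-style score: flip the sign of the binding-foundation effect sizes
# (Equation 10) and average across foundations (averaging is an assumption here).
SIGN_MFH = {
    "Care/Harm": +1, "Fairness/Cheating": +1,
    "Authority/Subversion": -1, "Sanctity/Degradation": -1, "Loyalty/Betrayal": -1,
}

def mfh_score(effect_sizes):
    # effect_sizes: foundation -> liberal-vs-conservative effect size, averaged over dictionaries
    signed = [SIGN_MFH[f] * delta for f, delta in effect_sizes.items()]
    return sum(signed) / len(signed)

example = {  # illustrative values, not measurements from this study
    "Care/Harm": 0.02, "Fairness/Cheating": 0.03,
    "Authority/Subversion": -0.01, "Sanctity/Degradation": -0.02, "Loyalty/Betrayal": -0.005,
}
print(mfh_score(example))
```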
Results Figure 4 shows effect sizes $\Delta P_{i_1,i_2}(e_f)$ for each foundation and MFH-SCOREs vs. model size (number of parameters). Effect sizes are averaged over the three moral foundations dictionaries.
![5_image_0.png](5_image_0.png)

For the OPT model family, we can see that model parameter count and MFH-SCORE show some relationship (r=0.69, although statistical power is limited due to the small number of models). In particular, the Sanctity/Degradation foundation maintains a non-zero effect size in the expected direction for all models with 6.7B parameters or more. Surprisingly, OPT-13B shows decreased effect sizes for Fairness/Cheating and Care/Harm in comparison to the smaller OPT-6.7B. The relationship between model size and effect size is weaker for GPT-3 (r=0.23). Care/Harm, Fairness/Cheating, Sanctity/Degradation, and Authority/Subversion have effect sizes in the expected direction for the Babbage, Curie, and DaVinci models, though the effect sizes are smaller than for the OPT family. Models from the GPT-3.5 family show the largest effect sizes overall. Unfortunately, no smaller model sizes are available for this family. If we include the GPT-3 and GPT-3.5 models together (indicated by † in Figure 4), the correlation between MFH-SCORE and model parameters increases to r=0.84. Interestingly, the OPT and GPT-3 families show Sanctity/Degradation as the most pronounced effect size for conservative prompting, and Fairness/Cheating as the most pronounced effect size for liberal prompting. GPT-3.5 instead shows the largest effect sizes for Authority/Subversion and Care/Harm, respectively.
## 3 Discussion
Section 2.1 posed two criteria to judge whether LLMs use moral foundations appropriately. For the weaker Criterion A, results show that LLMs do increase use of foundation words relevant to the foundation that is salient in a given scenario, at least for scenarios with clear human consensus on foundation salience. However, for Criterion B,
results show that LLM foundation use differs more from the human consensus than individual humans' foundation use does.
Section 2.2 compared LM foundation use with findings from moral psychology that identify differences in the moral foundations used by liberal and conservative political groups. Specifically, according to the Moral Foundations Hypothesis, liberals rely mostly on the Care/Harm and Fairness/Cheating foundations, while conservatives use all five foundations more evenly, using Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation more than liberals. This finding was first presented in (Graham et al., 2009), has since been supported with confirmatory factor analysis in (Doğruyol et al., 2019), and has been partially replicated (though with smaller effect sizes) in (Frimer, 2020).
Results indicate that models from the GPT-3, GPT-3.5, and OPT model families are more likely to use the binding foundations when prompted with conservative political identity, and are more likely to use the individualizing foundations when prompted with liberal political identity. Emphasis on individual foundations in each category differs by model family. OPT-30B shows larger effect sizes for Fairness/Cheating than Care/Harm and larger effect sizes for Sanctity/Degradation vs. Authority/Subversion, while GPT-3.5 demonstrates the opposite. I suspect that this may be due to differences in training data and/or training practices between the model families. This opens an interesting question of how to influence the moral mimicry capabilities that emerge during training, via dataset curation or other methods.

![6_image_0.png](6_image_0.png)
The results from Section 2.3 show some relationship between moral mimicry and model size. Effect sizes tend to increase with parameter count in the OPT family, and less so in the GPT-3 family. Both 175B-parameter GPT-3.5 models show relatively strong moral mimicry capabilities, more so than the 175B GPT-3 model text-davinci-001. This suggests that parameter count is not the only factor leading to moral mimicry. The GPT-3.5 models were trained with additional supervised fine-tuning not applied to the GPT-3 family, and used text and code pre-training rather than text alone (OpenAI, 2022).
## 4 Limitations
This work used the moral foundations dictionaries to measure the moral content of text produced by the studied LLMs. While studies have demonstrated correspondence between results from the dictionaries and human labels of moral foundational content (Mutlu et al., 2020; Graham et al., 2009), dictionary-based analysis is limited in its ability to detect nuanced moral expressions. Dictionary-based analysis could be complemented with machine-learning approaches (Garten et al., 2016; Johnson and Goldwasser, 2018; Pavan et al., 2020; Roy et al., 2022)
as well as human evaluation. This study attempted to control for variations in the prompt phrasing by averaging results over several prompt styles (Tables 2 and 3). These prompt variations were chosen by the author. A more principled selection procedure could result in a more diverse set of prompts.
The human studies that this study refers to (Graham et al., 2009; Frimer, 2020) were performed on populations from the United States. The precise political connotations of the terms "liberal" and "conservative" differ across demographics. Future work may explore how language model output varies when additional demographic information is provided, or when multilingual models are used. Documentation for the datasets used herein indicates that the crowd workers leaned politically left, and morally towards the Care/Harm and Fairness/Cheating foundations (Forbes et al., 2020; Hendrycks et al., 2021; Fraser et al., 2022). However, bias in the marginal foundation distribution does not hinder the present analysis, since the present experiments focus primarily on the difference in foundation use resulting from varying political identity. The analysis in Section 2.1 relies more heavily on the marginal foundation distribution; a foundationally-balanced dataset was constructed for this experiment. This study used GPT-3 (Brown et al., 2020), GPT-3.5 (OpenAI, 2022), and OPT (Zhang et al., 2022). Other pre-trained language model families of similar scale and architecture include BLOOM (https://bigscience.huggingface.co/blog/bloom), which I was unable to test due to compute budget, and LLaMA (Touvron et al., 2023), which was released after the experiments for this work concluded. While the OPT model weights are available for download, GPT-3 and GPT-3.5 model weights are not; this may present barriers to future work that attempts to connect the moral mimicry phenomenon to properties of the model. On the other hand, the hardware required to run openly-available models may be a barrier to experimentation that is not a concern for models hosted via an API.
Criticisms of Moral Foundations Theory include disagreements about whether a pluralist theory of morality is parsimonious (Suhler and Churchland, 2011; Dobolyi, 2016); Ch. 6 of (Haidt, 2013), disagreements about the number and character of the foundations (Yalçındağ et al., 2019; Harper and Rhodes, 2021), disagreements about the stability of the foundations across cultures (Davis et al., 2016), and criticisms suggesting bias in the Moral Foundations Questionnaire (Dobolyi, 2016). Moral Foundations Theory was used in this study because it provides established methods to measure moral content in text, and because MFT-based analyses have identified relationships between political affiliation and moral biases, offering a way to compare LLM and human behavior. The methods presented here may be applicable to other theories of morality; this is left for future work.
Work that aims to elicit normative moral or ethical judgement from non-human systems has received criticism. Authors have argued that nonhuman systems lack the autonomy and communicative intent to be moral agents (Talat et al., 2022; Bender and Koller, 2020). Criticisms have also been raised about the quality and appropriateness of data used to train such systems. Notably, crowdsourced or repurposed data often reflects *a priori* opinions of individuals who may not be informed about the topics they are asked to judge, and who may not have had the opportunity for discourse or reflection before responding (Talat et al., 2022; Etienne, 2021). Some have argued that systems that aggregate moral judgements from descriptive datasets cannot help but be seen as normative, since their reproduction of the popular or average view tends to be implicitly identified with a sense of correctness
(Talat et al., 2022). Finally, several authors argue that the use of non-human systems that produce apparent or intended normative judgements sets a dangerous precedent by short-circuiting the discursive process by which moral and ethical progress is made, and by obscuring accountability should such a system cause harm (Talat et al., 2022; Etienne, 2021).
The present study investigates the apparent moral rationalizations produced by prompted LLMs. This study does not intend to produce a system for normative judgement, and I would discourage a normative use or interpretation of the methods and results presented here. The recent sea change in natural language processing towards general-purpose LLMs prompted into specific behaviors enables end users to produce a range of outputs of convincing quality, including apparent normative moral or ethical judgements. Anticipating how these systems will impact end users and society requires studying model behaviors under a variety of prompting inputs. The present study was conducted with this goal in mind, under the belief that the benefit of understanding the moral mimicry phenomenon outweighs the risk of normative interpretation.
## 5 Related Work
Several machine ethics projects have assessed the extent to which LLM-based systems can mimic human normative ethical judgement, for example
(Hendrycks et al., 2021) and (Jiang et al., 2021).
Other projects evaluate whether LLMs can produce the relevant moral norms for a given scenario
(Forbes et al., 2020; Emelin et al., 2021), or whether they can determine which scenarios justify moral exceptions (Jin et al., 2022). Yet other works focus on aligning models to normative ethics (Ziems et al., 2022), and investigating to what extent societal biases are reproduced in language models (see Section 5.1 of Bommasani et al. 2022). As an example, Fraser, Kiritchenko, and Balkir (2022) analyze responses of the Delphi model (Jiang et al., 2021)
to the Moral Foundations Questionnaire (Graham et al., 2011), finding that its responses reflect the moral foundational biases of the groups that produced the model and its training data.
The aforementioned research directions typically investigate language models not prompted with any particular identity. This framing implies the pretrained model itself as the locus where a cohesive set of biases might exist. Recent work suggests an alternative view that a single model may be capable of simulating a multitude of "identities", and that these apparent identities may be selected from by conditioning the model via prompting (Argyle et al., 2023; Aher et al., 2023). Drawing on the latter view, the present study prompts LLMs to simulate behavior corresponding to opposed political identities, and evaluates the fidelity of these simulacra with respect to moral foundational bias. Relations between the present work and other works taking this
"simulation" view are summarized below.
Arora et al. (2023) probe for cultural values using Hofstede's six-dimension theory (Hofstede, 2001)
and the World Values Survey (Survey, 2022), and use prompt language rather than prompt tokens to condition the model with a cultural "identity".
Alshomary et al. (2021) and Qian et al. (2021) fine-tune GPT-2 models (1.5B parameters) on domain-specific corpora, and condition text generation with stances on social issues. The present work, in contrast, conditions on political identity rather than stance, evaluates larger models without domain-specific fine-tuning, and investigates LLM capabilities to mimic moral preferences. Concurrent work probes language models for behaviors including sycophancy, the tendency to mirror users' political views in a dialog setting (Perez et al., 2022). Perez et al. find that this tendency increases with scale above ~10B parameters. While sycophancy describes how model-generated text appears to express political views, conditioned on dialog user political views, moral mimicry describes how model-generated text appears to express moral foundational salience, conditioned on political identity labels. Argyle et al. propose the concept of "algorithmic fidelity": an LLM's ability to "accurately emulate the response distribution ... of human subgroups" under proper conditioning (Argyle et al., 2023). Moral mimicry can be seen as an instance of algorithmic fidelity where moral foundation use is the response variable of interest. Argyle et al. study other response variables: partisan descriptors, voting patterns, and correlational structure in survey responses.
## 6 Conclusion
This study evaluates whether LLMs can reproduce the moral foundational biases associated with social groups, a capability herein coined *moral mimicry*.
I measure the apparent use of five moral foundations in the text generated by pre-trained language models conditioned with a political identity. I show that LLMs reproduce the moral foundational biases associated with liberal and conservative political identities, that they modify their moral foundation use situationally (although not indistinguishably from humans), and that moral mimicry may relate to model size.
## Acknowledgements
I would like to thank the anonymous reviewers who provided valuable comments on this paper. I would also like to thank Professors Dipak Ghosal, Jiawei Zhang, and Patrice Koehl, who provided valuable feedback on this work, and colleagues, friends, and family for insightful discussions.
## References
Gati Aher, Rosa I. Arriaga, and Adam Tauman Kalai.
2023. Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies.
Milad Alshomary, Wei-Fan Chen, Timon Gurcke, and Henning Wachsmuth. 2021. Belief-based Generation of Argumentative Claims. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 224–233, Online. Association for Computational Linguistics.
Milad Alshomary and Henning Wachsmuth. 2021. Toward audience-aware argument generation. *Patterns*,
2(6):100253.
Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R.
Gubler, Christopher Rytting, and David Wingate.
2023. Out of One, Many: Using Language Models to Simulate Human Samples. *Political Analysis*,
pages 1–15.
Arnav Arora, Lucie-aimee Kaffee, and Isabelle Augenstein. 2023. Probing pre-trained language models for cross-cultural differences in values. In *Proceedings* of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 114–130, Dubrovnik, Croatia. Association for Computational Linguistics.
Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S.
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2022.
On the Opportunities and Risks of Foundation Models.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural* Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
Don E. Davis, Kenneth Rice, Daryl R. Van Tongeren, Joshua N. Hook, Cirleen DeBlaere, Everett L. Worthington Jr., and Elise Choe. 2016. The moral foundations hypothesis does not replicate well in Black samples. *Journal of Personality and Social Psychology*, 110(4):e23–e30.
DeepSpeed. 2022. ZeRO-Inference: Democratizing massive model inference.
https://www.deepspeed.ai/2022/09/09/zeroinference.html.
David Dobolyi. 2016. Critiques | Moral Foundations Theory.
Burak Doğruyol, Sinan Alper, and Onurcan Yılmaz.
2019. The five-factor model of the moral foundations theory is stable across WEIRD and non-WEIRD
cultures. *Personality and Individual Differences*,
151:109547.
Maxim Egorov, Karianne Kalshoven, Armin Pircher Verdorfer, and Claudia Peus. 2020. It's a Match: Moralization and the Effects of Moral Foundations Congruence on Ethical and Unethical Leadership Perception.
Journal of Business Ethics, 167(4):707–723.
Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 698–718, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hubert Etienne. 2021. The dark side of the 'Moral Machine' and the fallacy of computational ethical decision-making for autonomous vehicles. *Law, Innovation and Technology*, 13(1):85–107.
Matthew Feinberg and Robb Willer. 2015. From Gulf to Bridge: When Do Moral Arguments Facilitate Political Influence? Personality and Social Psychology Bulletin, 41(12):1665–1681.
Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 653–670, Online. Association for Computational Linguistics.
Kathleen C. Fraser, Svetlana Kiritchenko, and Esma Balkir. 2022. Does Moral Code have a Moral Code?
Probing Delphi's Moral Philosophy. In *Proceedings* of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 26–42, Seattle, U.S.A. Association for Computational Linguistics.
Jeremy Frimer. 2019. Moral Foundations Dictionary 2.0.
Jeremy A. Frimer. 2020. Do liberals and conservatives use different moral languages? Two replications and six extensions of Graham, Haidt, and Nosek's (2009)
moral text analysis. *Journal of Research in Personality*, 84:103906.
Leo Gao. 2021. On the Sizes of OpenAI API Models.
https://blog.eleuther.ai/gpt3-model-sizes/.
Justin Garten, Reihane Boghrati, J. Hoover, Kate M.
Johnson, and Morteza Dehghani. 2016. Morality Between the Lines: Detecting Moral Sentiment In Text.
Jesse Graham, Jonathan Haidt, and Brian A. Nosek.
2009. Liberals and conservatives rely on different sets of moral foundations. *Journal of Personality and* Social Psychology, 96(5):1029–1046.
Jesse Graham, Brian A. Nosek, Jonathan Haidt, Ravi Iyer, Spassena Koleva, and Peter H. Ditto. 2011. Mapping the Moral Domain. *Journal of personality and* social psychology, 101(2):366–385.
Jonathan Haidt. 2013. *The Righteous Mind: Why Good* People Are Divided by Politics and Religion. Vintage Books.
Craig A. Harper and Darren Rhodes. 2021. Reanalysing the factor structure of the moral foundations questionnaire. *The British Journal of Social Psychology*,
60(4):1303–1329.
Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt.
2021. Aligning AI with shared human values. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Geert Hofstede. 2001. Culture's Recent Consequences:
Using Dimension Scores in Theory and Research.
International Journal of Cross Cultural Management, 1(1):11–17.
Frederic R. Hopp, Jacob T. Fisher, Devin Cornell, Richard Huskey, and René Weber. 2021. The extended Moral Foundations Dictionary (eMFD): Development and applications of a crowd-sourced approach to extracting moral intuitions from text. *Behavior Research Methods*, 53(1):232–246.
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization.
Hang Jiang, Doug Beeferman, Brandon Roy, and Deb Roy. 2022. CommunityLM: Probing partisan worldviews from language models. In *Proceedings of the* 29th International Conference on Computational Linguistics, pages 6818–6826, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, Maxwell Forbes, Jon Borchardt, Saadia Gabriel, Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina Rini, and Yejin Choi. 2021. Can Machines Learn Morality? The Delphi Experiment.
Zhijing Jin, Sydney Levine, Fernando Gonzalez Adauto, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, and Bernhard Schölkopf.
2022. When to make exceptions: Exploring language models as accounts of human moral judgment. In NeurIPS.
Kristen Johnson and Dan Goldwasser. 2018. Classification of Moral Foundations in Microblog Political Discourse. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 720–730, Melbourne, Australia. Association for Computational Linguistics.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B.
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling Laws for Neural Language Models.
Rabia I. Kodapanakkal, Mark J. Brandt, Christoph Kogler, and Ilja van Beest. 2022. Moral Frames Are Persuasive and Moralize Attitudes; Nonmoral Frames Are Persuasive and De-Moralize Attitudes. *Psychological Science*, 33(3):433–449.
Andrew Luttrell, Aviva Philipp-Muller, and Richard E.
Petty. 2019. Challenging Moral Attitudes With Moral Messages. *Psychological Science*, 30(8):1136–1150.
Ece Çiğdem Mutlu, Toktam Oghaz, Ege Tütüncüler, and
Ivan Garibay. 2020. Do Bots Have Moral Judgement?
The Difference Between Bots and Humans in Moral Rhetoric. In *2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and* Mining (ASONAM), pages 222–226.
OpenAI. 2021. OpenAI API. https://openai.com/api/.
OpenAI. 2022. Model Index for Researchers.
Matheus C. Pavan, Vitor G. Dos Santos, Alex G. J.
Lan, Joao Martins, Wesley R. Santos, Caio Deutsch, Pablo B. Costa, Fernando C. Hsieh, and Ivandre Paraboni. 2020. Morality Classification in Natural Language Text. *IEEE Transactions on Affective Computing*, pages 1–1.
Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina
Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph, Noemí Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish, Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse, Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, and Jared Kaplan. 2022. Discovering Language Model Behaviors with Model-Written Evaluations.
Ming Qian, Jaye Laguardia, and Davis Qian. 2021.
Morality Beyond the Lines: Detecting Moral Sentiment Using AI-Generated Synthetic Context. In *Artificial Intelligence in HCI*, Lecture Notes in Computer Science, pages 84–94, Cham. Springer International Publishing.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Shamik Roy, Nishanth Sridhar Nakshatri, and Dan Goldwasser. 2022. Towards Few-Shot Identification of Morality Frames using In-Context Learning. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science
(NLP+CSS), pages 183–196, Abu Dhabi, UAE. Association for Computational Linguistics.
Christopher Suhler and Pat Churchland. 2011. Can Innate, Modular "Foundations" Explain Morality? Challenges for Haidt's Moral Foundations Theory.
Journal of cognitive neuroscience, 23:2103–16; discussion 2117.
World Values Survey. 2022. WVS Database.
https://www.worldvaluessurvey.org/wvs.jsp.
Zeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. 2022.
On the Machine Learning of Ethical Judgments from Natural Language. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 769–779, Seattle, United States. Association for Computational Linguistics.
Nitasha Tiku. 2022. The Google engineer who thinks the company's AI has come to life. *Washington Post*.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, and Iason Gabriel.
2022. Taxonomy of Risks posed by Language Models. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, pages 214–229, New York, NY, USA. Association for Computing Machinery.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing.
Bilge Yalçındağ, Türker Özkan, Sevim Cesur, Onurcan Yılmaz, Beyza Tepe, Zeynep Ecem Piyale, Ali Furkan Biten, and Diane Sunar. 2019. An Investigation of Moral Foundations Theory in Turkey Using Different Measures. *Current Psychology*, 38(2):440–457.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pretrained Transformer Language Models.
Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A
benchmark for ethical dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 3755–3773, Dublin, Ireland. Association for Computational Linguistics.
# A Appendix A: Additional Details Related To Experimental Methods
## A.1 Additional Details Related To LLMs Used In The Study
| Model Family | Model Variant | Number of Parameters | Instruction Fine-tuning |
|--------------|-------------------|----------------------|-------------------------|
| GPT-3 | text-ada-001 | 350M | None |
| GPT-3 | text-babbage-001 | 1.3B | FeedME |
| GPT-3 | text-curie-001 | 6.7B | FeedME |
| GPT-3 | text-davinci-001 | 175B | FeedME |
| GPT-3.5 | text-davinci-002 | 175B | ? |
| GPT-3.5 | text-davinci-003 | 175B | PPO |
| OPT | opt-350m | 350M | None |
| OPT | opt-1.3b | 1.3B | None |
| OPT | opt-6.7b | 6.7B | None |
| OPT | opt-13b | 13B | None |
| OPT | opt-30b | 30B | None |
Table 1: Models evaluated in this study. Information for GPT-3 and GPT-3.5 from (OpenAI, 2022). Information for OPT from (Zhang et al., 2022). Information for OPT-IML from (Iyer et al., 2023). FeedME: "Supervised fine-tuning on human-written demonstrations and on model samples rated 7/7 by human labelers on an overall quality score" (OpenAI, 2022); PPO: "Reinforcement learning with reward models trained from comparisons by humans" (OpenAI, 2022); ?: use of instruction fine-tuning is uncertain based on documentation.
## A.2 Additional Details Related To Datasets Used In The Study

## A.2.1 Preprocessing Details For Moral Stories Dataset
Each example in Moral Stories consists of a *moral* norm (a normative expectation about moral behavior), a *situation* which describes the state of some characters, an *intent* which describes what a particular character wants, and two *paths*, a *moral path* and *immoral path*. Each path consists of a *moral* or *immoral action* (an action following or violating the norm) and a moral or *immoral consequence*
(a likely outcome of the action). For the present experiments, I construct scenarios as the string concatenation of an example's situation, intent, and either moral action or immoral action. We do not use the consequences or norms, as they often include a reason why the action was moral/immoral, and thus could bias the moral foundational contents of the completions.
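As a hedged illustration of this construction, the snippet below concatenates the three fields into a moral and an immoral scenario; the dictionary keys are assumptions based on the public Moral Stories release and may not match the exact field names used here.

```python
def build_scenarios(example):
    """Build the moral and immoral scenario strings for one Moral Stories
    example by concatenating situation, intent, and one action.
    Norms and consequences are deliberately excluded, since they often state
    why the action is (im)moral."""
    base = f"{example['situation']} {example['intention']}"
    moral_scenario = f"{base} {example['moral_action']}"
    immoral_scenario = f"{base} {example['immoral_action']}"
    return moral_scenario, immoral_scenario
```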
We used 2,000 scenarios produced from the Moral Stories dataset, consisting of 1,000 randomly-sampled moral scenarios and 1,000 randomly-sampled immoral scenarios.
## A.2.2 Preprocessing Details For ETHICS Dataset
The ETHICS dataset contains five subsets of data, each corresponding to a particular ethical framework (deontology, justice, utilitarianism, commonsense, and virtue), each further divided into a "train" and "test" portion. For the present experiments, I
use the "train" split of the "commonsense" portion of the dataset, which contains 13,910 examples of scenarios paired with ground-truth binary labels of ethical acceptability. Of these, 6,661 are "short" examples, which are 1-2 sentences in length. These short examples were sourced from Amazon Mechanical Turk workers and consist of 3,872 moral examples, and 2,789 immoral examples. From these, I randomly select 1,000 examples split evenly according to normative acceptability, resulting in 500 moral scenarios and 500 immoral scenarios.
The train split of the commonsense portion of the ETHICS dataset also contains 7,249 "long" examples, 1-6 paragraphs in length, which were obtained from Reddit. These were unused in the present experiment, primarily due to the increased costs of using longer scenarios.
## A.2.3 Preprocessing Details For Social Chemistry Actions Dataset
The Social Chemistry 101 (Forbes et al., 2020)
dataset contains 355,922 structured annotations of 103,692 situations, drawn from four sources (Dear Abby, Reddit AITA, Reddit Confessions, and sentences from the ROCStories corpus; see (Forbes et al., 2020) for references). Situations are brief descriptions of occurrences in everyday life where social or moral norms may dictate behavior, for example "pulling out of a group project at the last minute". Situations are annotated with Rules-ofThumb (RoTs), which are judgements of actions that occur in the situation, such as "It's bad to not follow through on your commitments". Some situations may contain more than one action, but I
consider situations that are unanimously annotated as having only one action for the present experiment, as this simplifies interpretation of the moral foundation annotations. RoTs in the dataset are annotated with "RoT breakdowns". RoT breakdowns parse each RoT into its constituent action
(e.g. "not following through on commitments") and judgement ("it's bad"). Judgements are standardized to five levels of approval/disapproval: very bad, bad, expected/OK, good, very good. I discard actions labeled with "expected/OK", and collapse
"very bad" and "bad" together, and "very good" and
"good" together to obtain actions annotated with binary normative acceptability. Actions are also annotated with moral foundation labels (the example in the previous sentence was annotated with the Fairness/Cheating and Loyalty/Betrayal foundations). Additionally, each RoT belongs to one of the following categories - morality-ethics, socialnorms, advice, description. I use RoTs belonging to the "morality-ethics" category, since this is the category indicating that the RoT contains moral reasoning rather than advice or etiquette recommendations. After filtering RoTs and situations by category, and selecting examples with unanimous ratings for moral foundation and normative acceptability, I obtain a dataset of 1300 actions - 130 normatively moral actions and 130 normatively immoral actions for each of the five moral foundations.
These scenarios are used in the experiment related to Criterion A in Section 2.1.
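For illustration, the snippet below sketches the judgement-collapsing and filtering logic described above; the record field names are assumptions for this sketch and do not necessarily match the released Social Chemistry 101 schema.

```python
# Collapse the five judgement levels to a binary label; "expected/OK" is dropped.
JUDGEMENT_TO_LABEL = {
    "very bad": "immoral", "bad": "immoral",
    "good": "moral", "very good": "moral",
}


def keep_action(record):
    """record: one RoT breakdown with (assumed) fields 'rot_category',
    'judgement', and 'moral_foundations'. Keep morality-ethics RoTs whose
    judgement collapses to a binary label and that carry a foundation label."""
    return (
        record["rot_category"] == "morality-ethics"
        and record["judgement"] in JUDGEMENT_TO_LABEL
        and len(record["moral_foundations"]) > 0
    )
```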
## A.2.4 Preprocessing Details For Social Chemistry Situations Dataset
Criterion B requires comparing $P_H(e_f|s)$ and $P_{LM}(e_f|s)$, for human- and LLM-written open-ended text responses containing moral reasoning about some scenarios. I use situations from the Social Chemistry 101 dataset (Forbes et al., 2020), and use the human-written RoTs to estimate $P_H(e_f|s)$ using the moral foundations dictionaries. To estimate consensus human judgement $C_H(s)$, I use situations that are multiply annotated. Specifically, I filter the Social Chemistry 101 dataset to situations with 4 or more RoTs, and 4 or more RoT breakdowns per RoT. This results in a corpus of 170 scenarios. Unlike the Social Chemistry Actions dataset, this Social Chemistry Situations dataset is not foundationally-balanced: I encountered a trade-off between the minimum number of annotations per situation and the final corpus size, and balancing the dataset in terms of foundations would have reduced the dataset size further. This set of scenarios is used for the experiment related to Criterion B in Section 2.1.
## A.3 Additional Details Related To Moral Foundations Dictionaries

## A.4 Additional Details Related To Prompt Construction
Templates from Table 2 were used for the Moral Stories, ETHICS, and Social Chemistry Situations datasets, where the scenarios are longer descriptions of events, one sentence or more in length. Templates from Table 3 were used for the Social Chemistry Actions dataset, where scenarios are brief action descriptions (sentence fragments). This was done to ensure grammaticality.

![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
Table 2: Prompt template styles for situations
![13_image_2.png](13_image_2.png)
Table 3: Prompt template styles for actions
## B Appendix B: Additional Experimental Results

## B.1 Effect Size Vs. Dataset
Figure 6 shows effect sizes for liberal vs. conservative prompting, based on completions obtained from 2000 scenarios produced from Moral Stories and 1000 scenarios produced from ETHICS. Scores are separated by dictionary and dataset. See Section 2 for the methods used to calculate effect sizes.
Effect sizes and directions are consistent across datasets for the Care/Harm and Authority/Subversion foundations.
## B.2 Effect Size Vs. Prompt Template Style
Figure 7 shows the results obtained from analysis of completions generated with five different prompt styles, as described in Table 2.
Effects of liberal vs. conservative political identity are uniform in direction for the Care/Harm and Authority/Subversion foundations. Regardless of the prompt style or dictionary used, the completions contain more Care/Harm words when the liberal political identity is used, and more Authority/Subversion words when the conservative political identity is used. Effects are nearly uniform in direction for the Fairness/Cheating foundation, with liberal political identity resulting in increased use of this foundation for thirteen of fifteen combinations of prompt style and dictionary. Liberal prompting resulted in decreased use of the Fairness/Cheating foundation for prompt styles 1 and 2, when measured using MFDv2.
![14_image_0.png](14_image_0.png)
Results for the Sanctity/Degradation and Loyalty/Betrayal foundations are more varied. Effect directions are uniform for the Sanctity/Degradation foundation when measured with MFDv2 - liberal political identity results in lower Sanctity/Degradation use by 1-2 percent score across all prompt styles. Effects on Sanctity/Degradation are less consistent when measured using MFDv1 or eMFD - liberal prompting resulted in decreased use of Sanctity/Degradation words for only three out of five prompt styles. Measured by the eMFD, liberal prompting results in decreased use of Sanctity/degradation words for four of five prompt styles.
Effect directions are uniform for Loyalty/Betrayal when measured with MFDv1: prompting with liberal political identity results in greater percent scores for Loyalty/Betrayal for all prompt styles. Results are varied when measured with MFDv2: liberal prompting results in decreased use for only three of five prompt styles. When measured using the eMFD, liberal prompting results in decreased or equal use of the Loyalty/Betrayal foundation across the prompt styles, which is consistent within the dictionary, but is opposite in effect direction in comparison to MFDv1.
## C Appendix C: LLM Output Examples
Figure 8: Examples of completions obtained from Moral Stories dataset, from OpenAI models of increasing size.
Examples were randomly selected |
zhang-etal-2023-leco | {LECO}: Improving Early Exiting via Learned Exits and Comparison-based Exiting Mechanism | https://aclanthology.org/2023.acl-srw.43 | Recently, dynamic early exiting has attracted much attention since it can accelerate the inference speed of pre-trained models (PTMs). However, previous work on early exiting has neglected the intermediate exits{'} architectural designs. In this work, we propose a novel framework, Learned Exits and COmparison-based early exiting (LECO) to improve PTMs{'} early exiting performances. First, to fully uncover the potentials of multi-exit BERT, we design a novel search space for intermediate exits and employ the idea of differentiable neural architecture search (DNAS) to design proper exit architectures for different intermediate layers automatically. Second, we propose a simple-yet-effective comparison-based early exiting mechanism (COBEE), which can help PTMs achieve better performance and speedup tradeoffs. Extensive experiments show that our LECO achieves the SOTA performances for multi-exit BERT training and dynamic early exiting. | # Leco: Improving Early Exiting Via Learned Exits And **Comparison-Based** Exiting Mechanism
Jingfan Zhang1, Ming Tan2, Pengyu Dai3,4, Wei Zhu5∗
1 University of Ottawa, Canada 2 Southern University of Science and Technology, China 3 Chongqing University of Post and Telecommunication, China 4 Brunel University, London 5 East China Normal University, China
## Abstract
Recently, dynamic early exiting has attracted much attention since it can accelerate the inference speed of pre-trained models (PTMs).
However, previous work on early exiting has neglected the intermediate exits' architectural designs. In this work, we propose a novel framework, Learned Exits and COmparison-based early exiting (LECO), to improve PTMs' early exiting performances. First, to fully uncover the potentials of multi-exit BERT, we design a novel search space for intermediate exits and employ the idea of differentiable neural architecture search (DNAS) to design proper exit architectures for different intermediate layers automatically. Second, we propose a simple-yet-effective comparison-based early exiting mechanism (COBEE), which can help PTMs achieve better performance and speedup tradeoffs. Extensive experiments show that our LECO achieves the SOTA performances for multi-exit BERT training and dynamic early exiting.
## 1 Introduction
Despite achieving state-of-the-art (SOTA) performances on almost all the natural language processing (NLP) tasks (Lin et al., 2021), large pre-trained language models (PLMs) still have difficulty being applied to many industrial scenarios with low latency requirements. Many research works are devoted to speeding up the inference of BERT or other PLMs, such as network pruning (Zhu and Gupta, 2017; Xu et al., 2020a; Fan et al., 2019; Gordon et al., 2020), student network distillation (Sun et al., 2019; Sanh et al., 2019; Jiao et al., 2020), and early exiting (Teerapittayanon et al., 2016; Xin et al.,
2020; Kaya et al., 2019; Xin et al., 2021). Due to its potential in applications, early exiting has attracted much attention in the research field (Xu et al., 2021a). Early exiting requires a multi-exit BERT, a BERT backbone with an intermediate classifier (or exit) installed on each layer. Then, a dynamic early exiting mechanism is applied during the forward pass to ensure efficient inference. Early exiting is orthogonal to, and can work together with, static model compression methods (Tambe et al., 2020). However, the literature focuses less on the training of multi-exit BERT (Teerapittayanon et al., 2016; Xin et al., 2020; Liu et al., 2020; Xin et al., 2021), and there is no work systematically discussing the architectural design of the intermediate exits.
In this work, we propose a novel framework, Learned Exits and COmparison-based Early exiting (LECO), designed to uncover the full potential of multi-exit BERT in early exiting. First, we design a suitable and comprehensive search space for architectural learning of the intermediate exits
(see Figure 1). Our search space contains candidate activation functions, encoding operations, and pooling operations. We follow the differentiable neural architecture search (DNAS) framework like Liu et al. (2019a); Xie et al. (2019); Chen et al. (2021)
to learn a set of intermediate exits with different architectures automatically. Second, reflecting on the limitations of the patience-based early exiting method PABEE (Zhou et al., 2020), we propose a comparison-based early exiting (COBEE) mechanism. COBEE makes early exiting decisions by comparing the predicted distributions of adjacent intermediate layers.
We conduct extensive experiments and ablation studies on the GLUE benchmark (Wang et al.,
2018). We show that the learned intermediate exits of LECO outperform the previous SOTA multi-exit BERT training methods while adding fewer trainable parameters. Furthermore, our novel dynamic early exiting mechanism COBEE outperforms the previous SOTA early exiting mechanisms. Further analysis shows that: (a) our LECO framework can help to boost the performance of multi-exit BERT under different training strategies; and (b) our novel dynamic early exiting strategy outperforms the baseline early exiting methods.

∗Corresponding author: [email protected]

![1_image_0.png](1_image_0.png)
Our contributions are as follows:
- We propose a novel framework, LECO, which constructs a search space for intermediate exits and employs a DNAS framework to learn the suitable exits for different layers.
- We propose a novel comparison-based early exiting criterion which can achieve better quality-speed tradeoffs for PTMs.
- We conduct experiments to show that our LECO achieves SOTA performances for multiexit BERT training.
## 2 Related Work

## 2.1 Inference Acceleration Methods
Since the rise of BERT, a large body of literature has been devoted to speeding up the inference of BERT. Standard methods include direct network pruning (Zhu and Gupta, 2017; Xu et al., 2020a; Fan et al., 2019; Gordon et al., 2020), distillation (Sun et al., 2019; Sanh et al., 2019; Jiao et al., 2020), weight quantization (Zhang et al., 2020b; Bai et al., 2020; Kim et al., 2021), and adaptive inference (Zhou et al., 2020; Xin et al., 2020; Liu et al., 2020). Among them, adaptive inference has drawn much attention. Adaptive inference aims to deal with simple examples with only the shallow layers of PLMs, thus speeding up inference time on average.
Early exiting requires a multi-exit model, i.e., a BERT backbone with an intermediate classifier (or exit) installed on each layer. The early exiting literature mainly focuses on the development of early exiting strategies, that is, determining when an intermediate exit's prediction is suitable as the final model prediction. Score-based strategies (Teerapittayanon et al., 2016; Xin et al., 2020; Kaya et al., 2019; Xin et al., 2021), prior-based strategies (Sun et al., 2022), and patience-based strategies (Zhou et al., 2020) have been proposed. Teerapittayanon et al. (2016) use the entropy of an intermediate layer's predicted distribution to measure prediction uncertainty and decide whether to exit early. PABEE asks the model to exit when the current layer's prediction is the same as those of the previous layers.
Our work complements the early exiting literature by proposing the LECO framework, which improves early exiting performance via automatic design of exit architectures and a novel early exiting mechanism.
## 2.2 Neural Architecture Search
With the rapid development and wide industrial application of deep learning, researchers have devoted great effort to manually designing neural networks (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; He et al., 2016; Huang et al., 2017; Wang et al., 2022). The trend is to stack more and more convolutional or transformer layers to construct a deep network. Recently, to avoid manual architecture design, researchers began developing algorithms to design neural networks automatically. Thus, a new research sub-field of automated machine learning (AutoML) (He et al., 2021) called neural architecture search was established (Zoph and Le, 2017).
In early attempts, NAS required massive computation, often thousands of GPU days (Zoph and Le, 2017; Zoph et al., 2018; Liu et al., 2018). Recently, a particular group of one-shot NAS methods, led by the seminal work DARTS (Liu et al., 2019a), has attracted much attention. DARTS formulates the search space as a super-network that can adjust itself in a continuous space, so that the network and architectural parameters can be optimized alternately (bi-level optimization) using gradient descent. A series of works try to improve the performance and efficiency of DARTS. SNAS (Xie et al., 2019) reformulates DARTS as a credit assignment task while maintaining differentiability. PDARTS (Chen et al., 2021) analyzes the issues in the DARTS bi-level optimization and proposes a series of modifications. PC-DARTS (Xu et al., 2021b) reduces the memory cost during search by sampling partial channels in super-networks. FairDARTS (Chu et al., 2021) changes the softmax operations in DARTS into sigmoid and introduces a penalty term to prune the architectural parameters as needed. Gao et al. (2020) make the hyper-network closer to the discretized sub-network by penalizing the entropy of the architecture parameters.
Our work contributes to the NAS literature by investigating the architectural search of intermediate exits to improve early exiting performance.
## 3 Preliminaries
In this section, we introduce the necessary background for BERT early exiting. We consider the case of multi-class classification with $K$ classes, $\mathcal{K} = \{1, 2, ..., K\}$. The dataset consists of $N$ samples $\{(x_i, y_i), i \in \mathcal{I} = \{1, 2, ..., N\}\}$, where $x_i$ is an input sentence consisting of $L$ words, and $y_i \in \mathcal{K}$ is the label.
## 3.1 Early Exiting
Multi-exit PTM Early exiting is based on a multi-exit PTM, which is a PTM backbone with classifiers (or exits) at each layer. With $M$ layers, $M$ classifiers $f_m(x; \theta_m)$ are installed at the $M$ layers of the PTM, each of which maps its input to a probability distribution over the $K$ classes. $f_m(x; \theta_m)$ can take the form of a simple linear layer (linear exit) following Zhou et al. (2020). However, as shown in Liu et al. (2020), adding an encoding operation such as a multi-head self-attention layer (Vaswani et al., 2017) to the intermediate exits (MHA exits) can significantly boost the performance of intermediate layers, demonstrating the importance of architectural design.
Training We now introduce the three main multi-exit BERT training methods widely adopted in the literature.
JT. Perhaps the most straightforward fine-tuning strategy is to minimize the sum of all classifiers' loss functions and jointly update all parameters in the process. We refer to this strategy as JT. The loss function is:
$${\mathcal{L}}_{JT}=\sum_{m=1}^{M}{\mathcal{L}}_{m}^{CE}\tag{1}$$

where $\mathcal{L}_{m}^{CE} = \mathcal{L}_{m}^{CE}(y, f_m(x; \theta_m))$ denotes the cross-entropy loss of the m-th exit. This method is adopted by Teerapittayanon et al. (2016); Kaya et al. (2019); Zhou et al. (2020); Zhu (2021).
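A minimal PyTorch-style sketch of this joint objective is shown below, assuming a HuggingFace-style backbone that returns per-layer hidden states and a list of per-layer exit classifiers; it illustrates Eq. (1) and is not the authors' released code.

```python
import torch.nn.functional as F


def joint_training_loss(backbone, exits, input_ids, attention_mask, labels):
    """Sum of cross-entropy losses over all M exits (Eq. 1). Each exit maps
    its layer's [CLS] representation to class logits."""
    outputs = backbone(input_ids, attention_mask=attention_mask,
                       output_hidden_states=True)
    hidden_states = outputs.hidden_states[1:]  # skip the embedding layer
    loss = 0.0
    for layer_states, exit_head in zip(hidden_states, exits):
        logits = exit_head(layer_states[:, 0])  # [CLS] token representation
        loss = loss + F.cross_entropy(logits, labels)
    return loss
```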
2ST. The two-stage (2ST) (Xin et al., 2020; Liu et al., 2020) training strategy divides the training procedure into two stages. The first stage is identical to the vanilla BERT fine-tuning, updating the backbone model and only the final exit. In the second stage, we freeze all parameters updated in the first stage and fine-tune the remaining exits separately:
$$\text{Stage 1}:\mathcal{L}_{stage1}=\mathcal{L}_{M}^{CE}(y_{i},f_{M}(x_{i};\theta_{M}))\tag{2}$$
$$\text{Stage 2}:\mathcal{L}_{stage2}=\mathcal{L}_{m}^{CE},\ m=1,...,M-1\tag{3}$$

where $\mathcal{L}_{m}^{CE}=\mathcal{L}_{m}^{CE}(y_{i},f_{m}(x_{i};\theta_{m}))$ denotes the cross-entropy loss of the m-th exit.
ALT. It alternates between two objectives (taken from Equations 1 and 2) across different epochs, and was proposed by BERxiT (Xin et al., 2021):
$$\mathrm{Odd}:\mathcal{L}_{stage1}=\mathcal{L}_{M}^{CE}(y_{i},f_{M}(x_{i};\theta_{M}))\tag{4}$$
$$\mathrm{Even}:\mathcal{L}_{joint}=\sum_{m=1}^{M}\mathcal{L}_{m}^{CE}\tag{5}$$
For the search and training of our LECO method, we adopt the joint training (JT) method, following Teerapittayanon et al. (2016); Kaya et al. (2019);
Zhou et al. (2020); Zhu (2021). LECO mainly employs JT to fine-tune the PTM backbone and simultaneously learn the best exit architectures for all intermediate layers under a differentiable NAS
framework.
Early exiting inference At inference, the multi-exit PLM can operate in two different modes: (a) static early exiting, where a suitable exit $m^*$ is appointed to predict all queries; (b) dynamic early exiting, where the model predicts with the classifiers $f^{(1)}, f^{(2)}, \ldots$ in turn during the forward pass, until it receives a signal to stop early at an exit $m^* < M$, or arrives at the last exit $M$.
## 3.1.1 Inference Speedup Ratio
During inference, we run the test samples with batch size one following Zhou et al. (2020); Teerapittayanon et al. (2016). We report the actual wall-clock run-time reduction as the efficiency metric. For each test sample $x_i$, denote the inference time cost under early exiting as $t_i$, and the time cost without early exiting as $T_i$. The average speedup ratio on the test set is then calculated by
$$\mathrm{Speedup} = 1 - \frac{\sum_{i=1}^{N_{test}} t_i}{\sum_{i=1}^{N_{test}} T_i},$$
where $N_{test}$ is the number of samples in the test set. We run the test set ten times and report the average speedup ratio to avoid run-time randomness.
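As a small illustration (not the paper's evaluation script), the speedup ratio can be computed directly from the per-sample timings:

```python
def average_speedup(early_exit_times, full_model_times):
    """Speedup = 1 - (sum of early-exit run-times) / (sum of full-depth run-times)."""
    assert len(early_exit_times) == len(full_model_times)
    return 1.0 - sum(early_exit_times) / sum(full_model_times)

# e.g. three test samples timed in seconds with batch size one
print(average_speedup([0.02, 0.03, 0.01], [0.05, 0.05, 0.05]))  # -> 0.6
```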
## 3.2 Preliminaries On DARTS
Assume there is a pre-defined space of operations denoted by $\mathcal{O}$, where each element $o(\cdot)$ denotes a neural network operation, such as a convolution, self-attention, or activation. DARTS (Liu et al., 2019a) operates on a search cell, a fully connected directed acyclic graph (DAG) with $N$ nodes. Let $(i, j)$ denote a pair of nodes. The core idea of DARTS is to initialize a super-network stacked with blocks sharing the same architecture as the DAG. During the search, each edge in the DAG is a weighted sum over all $|\mathcal{O}|$ operations in $\mathcal{O}$:
$$f_{i,j}(z_i) = \sum_{o \in \mathcal{O}} a^{o}_{i,j} \cdot o(z_i), \qquad a^{o}_{i,j} = \frac{\exp \alpha^{o}_{i,j}}{\sum_{o' \in \mathcal{O}} \exp \alpha^{o'}_{i,j}},$$
where $z_i$ denotes the output of the $i$-th node, and $\alpha^{o}_{i,j}$ are the architectural parameters representing the weight (or importance score) of $o(\cdot)$ on edge $(i, j)$. The output of a node is the sum of all input flows, i.e., $z_j = \sum_{i<j} f_{i,j}(z_i)$. The output of the entire cell is formed by summing the last two nodes.
This design makes the entire framework differentiable with respect to both the layer weights and the architectural parameters $\alpha^{o}_{i,j}$, so that it can perform architecture search in an end-to-end fashion. The standard optimization method is the bi-level optimization proposed in DARTS. After the search process is completed, the discretization procedure extracts the final sub-network by dropping the operations receiving lower scores.
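A minimal PyTorch sketch of the DARTS mixed edge described above: each candidate operation is weighted by a softmax over the architectural parameters, keeping the edge differentiable. The candidate operation list here is a toy placeholder rather than LECO's actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DAG edge: f_{i,j}(z_i) = sum_o softmax(alpha)_o * o(z_i)."""

    def __init__(self, dim):
        super().__init__()
        # toy candidate operation set O for this edge
        self.ops = nn.ModuleList([
            nn.Identity(),                                   # skip-connection
            nn.Linear(dim, dim),                             # a simple linear transform
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()),   # transform + activation
        ])
        # architectural parameters alpha^o_{i,j}, one scalar per candidate operation
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, z):
        weights = F.softmax(self.alpha, dim=-1)              # a^o_{i,j}
        return sum(w * op(z) for w, op in zip(weights, self.ops))

edge = MixedOp(dim=64)
out = edge(torch.randn(8, 64))   # differentiable w.r.t. both layer weights and alpha
```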
## 4 Search Space Of LECO
As depicted in Figure 1, we construct the search space of a LECO intermediate exit mimicking the MHA exit. Representations of the current BERT layer, $H^{(m)}_i$, are first down-sampled to a smaller dimension $\mathbb{R}^{d_e}$ (e.g., 64) to keep the intermediate exit parameter-efficient.1 They then go through an activation cell, an encoder cell, a pooling cell, and finally another activation cell. The whole DAG of the intermediate exit consists of 7 edges.
Activation cell Both activation cells are one-step DAGs (Figure 1), designated to choose a proper activation function from several candidates. Similar to So et al. (2019), the collection of activation functions we consider is: (a) **ReLU** (Agarap, 2018); (b) **GeLU** (Hendrycks and Gimpel, 2016); (c) **SWISH** (Ramachandran et al., 2017); (d) **Tanh** (Krizhevsky et al., 2012); (e) **NullAct**, which makes no changes to the input.
Encoder cell As shown in Figure 1, different from Wang et al. (2020); Zhu et al. (2021a), we construct our encoder cell as a simple DAG, which consists of at most two encoder operations. Encoder operations 1 and 2 each encode the cell's input, and their outputs are summed to form the output of the encoder cell. As an extension to the encoder search space of Wang et al. (2020); Zhu et al. (2021a); Chen et al. (2020), our collection of encoder operations consists of the following commonly used encoding operations: (a) 1-d convolutional layers, with stride 1, same padding, output filters equal to the input's dimension, and kernel size equal to 1, 3, or 5 (denoted as **conv_**k, k = 1, 3, 5); (b) multi-head self-attention layers (Vaswani et al., 2017), with k = 2, 4, 8 attention heads and head size $d_e/k$ (denoted as **mha_**k, k = 2, 4, 8); (c) skip-connection, denoted as **skip-connect**; (d) the null encoding operation that multiplies the input by a zero tensor (**null**).2

Pooling cell It is also a one-step DAG for selecting the proper pooling layer. The most commonly used pooling operation for PTM-based models is to extract the representation of the [CLS] token (denoted as **cls_pool**). As summarized in Gong et al.
(2018), other commonly used pooling operations are: max pooling (**max_pool**); average pooling
(**avg_pool**); self-attention based pooling (**sa_pool**).
Note that our search space contains the MHA
exit (introduced in Section 3.1) as a special case.
The above search space can result in 6.87e+34 combinations of different multi-exit BERT architectures. We mainly follow DARTS (Liu et al., 2019a) to search for the optimal architecture designs of exits. Different from Liu et al. (2019a), we adopt a macro search space; that is, the exits at different layers have different architectural parameters, thus resulting in different architectures for different layers.
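For reference, the candidate sets above can be written down as a plain configuration; the operation names mirror the labels used in this section, but the snippet itself is only an illustrative sketch and the MHA-exit tuple at the end is our assumption about how that special case would be encoded.

```python
# Candidate operation sets for one LECO exit cell (illustrative configuration).
LECO_SEARCH_SPACE = {
    "activation": ["relu", "gelu", "swish", "tanh", "null_act"],
    "encoder": [
        "conv_1", "conv_3", "conv_5",   # 1-d convolutions with kernel size 1 / 3 / 5
        "mha_2", "mha_4", "mha_8",      # multi-head self-attention with 2 / 4 / 8 heads
        "skip-connect", "null",         # identity and zero operations
    ],
    "pooling": ["cls_pool", "max_pool", "avg_pool", "sa_pool"],
}

# An exit is assembled as: activation -> encoder cell (two ops, summed) -> pooling -> activation.
# The MHA exit is one point in this space, e.g. roughly:
mha_exit_like = ("tanh", ("mha_8", "null"), "cls_pool", "tanh")
```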
## 5 Comparison-Based Early Exiting
The patience-based mechanism (Zhou et al., 2020) validates the early exiting decisions among the previous layers, providing a promising direction for designing early exiting mechanisms. However, the early exiting condition in PABEE is coarse: it directly compares the predicted labels. It is common for BERT to change its predictions after a few intermediate layers, so PABEE's early exiting performance with low patience parameters may not be reliable. To summarize, we need a more fine-grained criterion to generate more reliable early exiting signals.
We now introduce our comparison-based early exiting method, COBEE. The inference procedure is illustrated in Figure 1. Assume the forward pass has reached layer $m < M$. We now compare the predicted distributions of layer $m$ and layer $m'$ ($m > m'$) as follows. Denote the label that receives the highest probability mass at layer $m$ as $k^*_m$, and the probability distribution of exit $m$ as $\mathbf{Pr}_m$; then the disagreement between layer $m$ and layer $m'$ is calculated as:
$$\mathrm{Di}(\mathbf{Pr}_{m},\mathbf{Pr}_{m^{\prime}})=|\mathbf{Pr}_{m}(k_{m}^{*})-\mathbf{Pr}_{m^{\prime}}(k_{m}^{*})|.\tag{6}$$
For simplicity, we denote $\mathrm{di}_{m,m'} = \mathrm{Di}(\mathbf{Pr}_m, \mathbf{Pr}_{m'}) \in \mathbb{R}$. The smaller the value of $\mathrm{di}_{m,m'}$, the more consistent the predicted distributions $\mathbf{Pr}_m$ and $\mathbf{Pr}_{m'}$ are with each other. We use a counter $cnt$ to store the number of times the disagreement scores between adjacent layers are less than the pre-defined exiting threshold $\tau$. At layer $m$, $cnt_m$ is calculated as:
$$c n t_{m}=\begin{cases}c n t_{m-1}+1,&\text{if}\operatorname{di}_{m,m-1}<\tau,\\ 0,&\text{otherwise.}\end{cases}\quad(7)$$
If $\mathrm{di}_{m,m-1}$ is less than the pre-defined threshold, the patience counter is increased by 1; otherwise, the patience counter is reset to 0. If $cnt_m$ reaches the pre-defined patience value $t$, the model stops inference and exits early; otherwise, the model proceeds to the next layer. If the model does not exit early at any intermediate layer, it uses the final classifier $f_M$ for prediction.
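The COBEE inference procedure (Equations 6-7) can be summarised in the following sketch. For clarity the per-layer probability distributions are passed in precomputed, whereas in practice they are produced layer by layer during the forward pass; this is not the authors' released code.

```python
def cobee_early_exit(layer_probs, tau, patience):
    """Comparison-based early exiting.

    layer_probs: list of M probability vectors, layer_probs[m][k] = Pr_m(k).
    tau: disagreement threshold; patience: required count t.
    Returns (exit_layer_index, predicted_label), both 0-indexed.
    """
    cnt = 0
    for m in range(1, len(layer_probs)):
        probs_m, probs_prev = layer_probs[m], layer_probs[m - 1]
        k_star = max(range(len(probs_m)), key=lambda k: probs_m[k])
        di = abs(probs_m[k_star] - probs_prev[k_star])   # Equation 6
        cnt = cnt + 1 if di < tau else 0                 # Equation 7
        if cnt >= patience:                              # enough consecutive agreements
            return m, k_star
    final = layer_probs[-1]                              # fall back to the last exit
    return len(layer_probs) - 1, max(range(len(final)), key=lambda k: final[k])
```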
## 6 Experiments

## 6.1 Datasets
We evaluate our proposed approach on the classification tasks of the GLUE benchmark (Wang et al., 2018). We exclude the STS-B task since it is a regression task, and we exclude the WNLI task following previous work (Devlin et al., 2019; Jiao et al., 2020; Xu et al., 2020b). Since the original test sets are not publicly available, we follow Zhang et al. (2020a) and Mahabadi et al. (2021)
to construct the train/dev/test splits as follows: (a)
for datasets with fewer than 10k samples (RTE,
MRPC, CoLA), we divide the original validation set in half, using one half for validation and the other for testing. (b) for larger datasets, we split 1k samples from the training set as the development set, and use the original development set as the test set. The detailed dataset statistics are presented in Table 1.
For MNLI, we report acc, which is the average of the accuracy scores on the matched and mismatched test set. For MRPC and QQP, we report acc-f1, which is the average of accuracy and F1 scores. For CoLA, we report mcc, which is the Matthews correlation. For all other tasks, we report accuracy (acc).
## 6.2 Baseline Methods
We compare our LECO framework with the following baselines:
Multi-exiting model training For multi-exit model training, we compare: (a) Joint training (JT)
(Zhou et al., 2020; Teerapittayanon et al., 2016),
with both a linear exit and an MHA exit (de = 64);
(b) two-stage training (2ST) (Liu et al., 2020; Xin et al., 2020), with an MHA exit (de = 64); (c) alternating training (ALT) in Xin et al. (2021); (d) the
Gradient Equilibrium technique (GradEquil) (Li et al., 2019), which incorporates JT with gradient adjustments and is adopted by Liu et al. (2021); (e)
Global Past Future (Liao et al., 2021) (Global-PF)
which asks the lower layers to imitate the deeper layers; (f) GAML-BERT (Zhu et al., 2021b), which employs a mutual learning strategy to improve the performances of shallow exits.
Early exiting methods We compare the early exiting performance of our COBEE method on the multi-exit backbone trained under the LECO framework with the following methods: (a) the entropy-based method (Entropy) originating from Teerapittayanon et al. (2016), which is equivalent to the maximum-probability based method of Schwartz et al. (2020); (b) the patience-based method (Patience) (Zhou et al., 2020); (c) the learning-to-exit based method (LTE) proposed by Xin et al. (2021), which trains an extra meta-classifier to estimate the confidence on a sample and achieves the state-of-the-art early exiting performance. For comparison, we also run the patience-based method on the backbone obtained by the JT method with linear exits.
| Category | Datasets | \|train\| | \|dev\| | \|test\| | \|Y\| | Type | Labels |
|---|---|---|---|---|---|---|---|
| Single-sentence | SST-2 | 66349 | 1000 | 872 | 2 | sentiment | positive, negative |
| Single-sentence | CoLA | 8551 | 521 | 522 | 2 | linguistic acceptability | acceptable, not acceptable |
| Sentence-pair | MNLI | 391702 | 1000 | 19647 | 3 | NLI | entailment, neutral, contradiction |
| Sentence-pair | MRPC | 3668 | 204 | 204 | 2 | paraphrase | equivalent, not equivalent |
| Sentence-pair | QNLI | 103743 | 1000 | 5463 | 2 | NLI | entailment, not entailment |
| Sentence-pair | QQP | 362846 | 1000 | 40430 | 2 | paraphrase | equivalent, not equivalent |
| Sentence-pair | RTE | 2490 | 138 | 139 | 2 | NLI | entailment, not entailment |

Table 1: Dataset statistics.
## 6.3 Experimental Settings
Devices We implement LECO on top of HuggingFace's Transformers. We conduct our experiments on Nvidia V100 16GB GPUs.
PTM models. We mainly adopt the ALBERT
base (Lan et al., 2019) backbone. We will also include RoBERTa-base (Liu et al., 2019b), and DeBERTa-base (He et al., 2020) in the ablation studies.
Settings for Architecture search We add a LECO search cell (Figure 1) with dimension $d_e$ equal to 32 on each intermediate layer of the PTM
and adopt the DARTS (Liu et al., 2019a) method to learn the best exit architecture for each layer.
AdamW optimizer (Loshchilov and Hutter, 2019)
is used for both the model and architecture parameters. At the beginning of each epoch, the training set is randomly split into D1 (for updating model parameters) and D2 (for updating architecture parameters) with a ratio of 1 : 1. The search will last for 30 epochs. The learning rate is 2e-5 for model parameters and 2e-4 for architectural parameters.
The search procedure is run once on each GLUE
task.
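A schematic of the alternating bi-level updates used in the search stage, with weights updated on D1 and architectural parameters on D2. Optimiser types and learning rates follow the text above, while the model interface, loss function, and dataloaders are placeholders; the per-epoch re-splitting of the training set is omitted for brevity.

```python
import itertools
import torch

def run_search(model, loss_fn, model_params, arch_params, d1_loader, d2_loader, epochs=30):
    """Alternating bi-level optimisation: model weights on D1, architecture parameters on D2."""
    opt_w = torch.optim.AdamW(model_params, lr=2e-5)   # model parameters
    opt_a = torch.optim.AdamW(arch_params, lr=2e-4)    # architectural parameters alpha

    for _ in range(epochs):
        for (x1, y1), (x2, y2) in zip(d1_loader, itertools.cycle(d2_loader)):
            opt_w.zero_grad()
            loss_fn(model(x1), y1).backward()          # step 1: update weights on a D1 batch
            opt_w.step()

            opt_a.zero_grad()
            loss_fn(model(x2), y2).backward()          # step 2: update alpha on a D2 batch
            opt_a.step()
```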
Settings for Architecture evaluation After the search procedure ends, the top-scored sub-network is discretized from the super-network at each layer and is trained from scratch as the final learned exit. The learning rate is 2e-5, and the AdamW optimizer (Loshchilov and Hutter, 2019) is used for optimization. We evaluate on the dev set and save the checkpoint after each epoch. After training ends, we evaluate the best checkpoint on the test set. We train the final learned exits under 5 random seeds to obtain the average test performance.
## 6.4 Main Results
Comparison of multi-exit model training methods Table 2 reports the main results on the GLUE
benchmark with ALBERT as the backbone model.
All baseline models are run with the original authors' open-sourced code. We report AVG, the cross-layer average score, and BEST, the best score among all the intermediate layers. From Table 2, our LECO method outperforms the previous multi-exit model training methods in terms of the AVG scores (with statistical significance), demonstrating that our LECO framework effectively boosts the overall performance of intermediate exits and thus provides stronger backbones for early exiting. Note that both 2ST + MHA exit (Liu et al., 2020) and JT + MHA exit introduce 66k parameters per exit, while the LECO method adds 25k-26k parameters per exit. The comparison among the three methods demonstrates that our LECO method does not rely merely on adding more parameters to obtain performance improvements. The improvements of LECO result from better architectural designs for exits of different depths.
| Method | RTE AVG | RTE BEST | MRPC AVG | MRPC BEST | CoLA AVG | CoLA BEST | SST-2 AVG | SST-2 BEST | QNLI AVG | QNLI BEST | QQP AVG | QQP BEST | MNLI AVG | MNLI BEST |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| JT + linear exit | 66.8 | 72.5 | 83.7 | 87.9 | 43.7 | 53.3 | 89.2 | 91.1 | 82.6 | 87.3 | 82.2 | 87.2 | 76.0 | 83.1 |
| JT + MHA exit | 68.1 | 76.9 | 84.1 | 88.2 | 43.6 | 57.5 | 88.2 | 91.5 | 82.8 | 87.6 | 82.4 | 87.1 | 76.8 | 83.2 |
| GradEquil | 67.3 | 77.4 | 84.2 | 89.3 | 43.6 | 56.1 | 89.2 | 91.8 | 82.4 | 88.0 | 82.7 | 87.0 | 76.5 | 83.6 |
| ALT | 68.5 | 77.8 | 84.6 | 88.3 | 44.1 | 57.3 | 88.9 | 91.6 | 82.3 | 87.8 | 82.5 | 86.8 | 76.6 | 83.2 |
| GAML-BERT | 68.8 | 77.6 | 84.9 | 88.8 | 45.0 | 57.9 | 89.1 | 92.3 | 82.6 | 87.9 | 82.6 | 87.5 | 75.9 | 83.4 |
| Global-PF | 68.5 | 78.1 | 84.9 | 88.6 | 45.1 | 57.7 | 88.9 | 92.6 | 82.5 | 88.1 | 82.6 | 87.4 | 76.5 | 83.3 |
| 2ST + MHA exit | 68.9 | 77.5 | 85.1 | 89.2 | 45.0 | 57.9 | 89.3 | 92.4 | 82.5 | 88.0 | 82.7 | 87.3 | 76.2 | 82.7 |
| LECO (ours) | 69.7∗ | 77.9 | 85.8∗ | 89.4 | 46.4∗ | 58.0 | 89.6∗ | 92.5 | 83.4∗ | 88.1 | 83.1∗ | 87.4 | 77.3∗ | 83.4 |

Table 2: Main results on the GLUE benchmark with the ALBERT backbone. AVG is the cross-layer average score and BEST is the best score among all intermediate layers.
![6_image_0.png](6_image_0.png)
Comparison of dynamic early exiting mechanisms We compare our COBEE method with the previous best-performing early exiting methods on the multi-exit ALBERT-base backbone trained under our LECO framework (as reported in Table 2). We also run the patience-based early exiting with the multi-exit ALBERT-base trained with the JT method. For the patience-based method
(Zhou et al., 2020), early exiting is run on different patience parameters. For the other methods, we run early exiting under different confidence thresholds or patience parameters so that the speedup-performance curves consist of at least 20 points evenly distributed across the interval
(0, 1) of speedup ratios. The speedup-performance curves for the RTE and SST-2 tasks are plotted in Figure 2.
The following takeaways can be made from Figure 2: (a) with the same backbone model, our COBEE method achieves better speedup-performance trade-offs than the previous SOTA early exiting methods, especially when the speedup ratio is large. (b) The comparison between Patience (on the LECO backbone) and JT + linear exit with Patience demonstrates that our LECO method provides superior backbones for early exiting and consistently results in better performance under different speedup ratios, even though it introduces a more complex exit architecture. The learned exit architectures constitute 0.25% of the parameters on each intermediate layer and increase inference latency by 0.6% on average. However, the performance gains on the intermediate layers clearly outweigh the increased latency.
## 6.5 Discussions And Ablation Studies
Discussion on the learned architectures Table 6 in Appendix A presents the best learned exit architectures at each layer of ALBERT when the downstream task is RTE or SST-2. Three observations can be made: (a) although we allow at most two encoder operations in the encoder search cell, more than half of the learned exits include only one valid encoding operation, making the exits more parameter-efficient.
| Method | RTE | SST-2 |
|---|---|---|
| LECO | 69.7 | 89.6 |
| 2ST + MHA exit | 68.9 | 89.3 |
| 2ST + LECO | 69.6 | 89.5 |
| ALT | 68.5 | 88.9 |
| ALT + LECO | 69.3 | 89.4 |

Table 3: AVG scores of LECO combined with different multi-exit training strategies.
(b) The learned architectures tend to use a pair of different activation functions, which differs from the Tanh-Tanh activation combination applied in the MHA exit (Liu et al., 2020). (c) Most exits do not select the **cls_pool** pooling operation, validating the necessity of our pooler search cell.
LECO works well with other multi-exit training strategies In the main experiments, we train LECO with the JT method. Table 3 reports the results of LECO when trained with 2ST and ALT. The results show that LECO can effectively improve the performance of 2ST and ALT, and achieves results comparable to LECO combined with JT. However, the JT method is more convenient and takes less training time.
LECO works well with other pretrained backbones We now substitute the pretrained backbone with RoBERTa-base (Liu et al., 2019b) and DeBERTa-base (He et al., 2020); the results are reported in Table 4. We can see that our LECO framework also helps to improve the average performance of the multi-exit RoBERTa/DeBERTa models. An interesting takeaway is that RoBERTa and DeBERTa cannot outperform ALBERT in terms of AVG scores. We hypothesize that because ALBERT shares parameters across transformer layers, the differences between its shallow and deep layers are smaller than in the other models.
Ablation on the search space We now conduct an ablation study to show the validity of our search space design. We consider reducing our search space O to a singleton step-by-step: (a) reduce the activation cells by only keeping the **Tanh** activation
(O1); (b) further reduce the pooler cell to only include **cls_pool** (O2); (c) further reduce the encoder cell to only include **mha_dot**, and now the search space only contains the MHA exit. Table 5 reports the search results on different search spaces. From Table 5, we can see that dropping any components of the whole search space results in performance
| Backbone | Method | RTE | SST-2 |
|---|---|---|---|
| ALBERT | LECO | 69.7 | 89.6 |
| ALBERT | JT + MHA exit | 68.1 | 88.2 |
| RoBERTa | LECO | 68.6 | 88.7 |
| RoBERTa | JT + MHA exit | 66.5 | 87.4 |
| DeBERTa | LECO | 69.5 | 89.3 |
| DeBERTa | JT + MHA exit | 66.9 | 88.1 |

Table 4: AVG scores of LECO with different pretrained backbones.
| Search space | RTE | SST-2 |
|---|---|---|
| O | 69.7 | 89.6 |
| O1 | 69.3 | 89.1 |
| O2 | 68.9 | 88.7 |
| MHA exit | 68.1 | 88.2 |

Table 5: Search results (AVG scores) with reduced search spaces.
losses, demonstrating that our search space design is necessary and beneficial.
## 7 Conclusion
In this work, we propose a novel framework, LECO. Our contributions are three-fold. First, LECO designs a unified search space for the architectural design of intermediate exits. Second, we apply the differentiable NAS framework of DARTS to learn the optimal exit architectures automatically. Third, we propose a novel comparison-based early exiting mechanism, COBEE. Experiments on the GLUE benchmark and ablation studies demonstrate that our LECO framework achieves state-of-the-art results on multi-exit BERT training and outperforms the previous SOTA dynamic early exiting methods.
## Limitation
Although our LECO framework is shown to be effective in improving multi-exit BERT training, it still has certain limitations that need to be addressed in the future: (a) MHA exits and our learned exits introduce new parameters and additional FLOPs. We would like to explore more parameter-efficient methods to improve multi-exit BERT training in future work. (b) In this work, we demonstrate our framework's performance on sentence classification and sentence-pair classification tasks. In future work, we would like to extend our framework to broader tasks such as sequence labeling, relation extraction, and text generation.
## Ethics Statement
Our LECO framework is designed to improve the training of multi-exit BERT and dynamic early exiting performance. Our work can facilitate the deployment and application of pre-trained models on devices with less powerful computation capabilities, making state-of-the-art models accessible for everyone. In addition, we hope this technology can help reduce the carbon footprint of NLP-based applications. Furthermore, the datasets we experiment with are widely used in previous work and, to our knowledge, do not introduce new ethical concerns.
## References
Abien Fred Agarap. 2018. Deep learning using rectified linear units (relu). *ArXiv*, abs/1803.08375.
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2020. Binarybert: Pushing the limit of bert quantization. *arXiv preprint arXiv:2012.15701*.
Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, and Jingren Zhou. 2020. Adabert: Taskadaptive bert compression with differentiable neural architecture search. In *IJCAI*.
Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. 2021. Progressive darts: Bridging the optimization gap for nas in the wild. *ArXiv*, abs/1912.10952.
Xiangxiang Chu, Bo Zhang, Ruijun Xu, and Jixiang Li. 2021. Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search. *2021* IEEE/CVF International Conference on Computer Vision (ICCV), pages 12219–12228.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Angela Fan, Edouard Grave, and Armand Joulin. 2019.
Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*.
Yuan Gao, Haoping Bai, Zequn Jie, Jiayi Ma, Kui Jia, and Wei Liu. 2020. Mtl-nas: Task-agnostic neural architecture search towards general-purpose multitask learning. *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 11540–11549.
Jingjing Gong, Xipeng Qiu, Shaojing Wang, and Xuanjing Huang. 2018. Information aggregation via dynamic routing for sequence encoding. In *COLING*.
Mitchell A Gordon, Kevin Duh, and Nicholas Andrews.
2020. Compressing bert: Studying the effects of weight pruning on transfer learning. *arXiv preprint* arXiv:2002.08307.
Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun.
2016. Deep residual learning for image recognition.
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decodingenhanced bert with disentangled attention. *ArXiv*,
abs/2006.03654.
Xin He, Kaiyong Zhao, and Xiaowen Chu. 2021. Automl: A survey of the state-of-the-art. Knowl. Based Syst., 212:106622.
Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv: Learning*.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger.
2017. Densely connected convolutional networks.
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
Tinybert: Distilling bert for natural language understanding. *ArXiv*, abs/1909.10351.
Y. Kaya, Sanghyun Hong, and T. Dumitras. 2019.
Shallow-deep networks: Understanding and mitigating network overthinking. In *ICML*.
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W
Mahoney, and Kurt Keutzer. 2021. I-bert: Integeronly bert quantization. In *International conference* on machine learning, pages 5506–5518. PMLR.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*,
60:84 - 90.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Hao Li, Hong Zhang, Xiaojuan Qi, Ruigang Yang, and Gao Huang. 2019. Improved techniques for training adaptive deep networks. *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pages 1891–1900.
Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. *CoRR*, abs/1409.1556.
Kaiyuan Liao, Yi Zhang, Xuancheng Ren, Qi Su, Xu Sun, and Bin He. 2021. A global past-future early exit method for accelerating inference of pretrained language models. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2013–2023, Online.
Association for Computational Linguistics.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019.
Patient knowledge distillation for bert model compression. *arXiv preprint arXiv:1908.09355*.
Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, and Xipeng Qiu. 2022. A simple hash-based early exiting approach for language understanding and generation. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 2409–2421, Dublin, Ireland. Association for Computational Linguistics.
Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. 2021. A survey of transformers. *ArXiv*,
abs/2106.04554.
Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Loddon Yuille, Jonathan Huang, and Kevin P. Murphy. 2018. Progressive neural architecture search. In *ECCV*.
Thierry Tambe, Coleman Hooper, Lillian Pentecost, EnYu Yang, Marco Donato, Victor Sanh, Alexander M.
Rush, David M. Brooks, and Gu-Yeon Wei. 2020.
Edgebert: Optimizing on-chip inference for multitask nlp. *ArXiv*, abs/2011.14203.
Hanxiao Liu, Karen Simonyan, and Yiming Yang.
2019a. Darts: Differentiable architecture search.
ArXiv, abs/1806.09055.
Surat Teerapittayanon, Bradley McDanel, and H. T.
Kung. 2016. Branchynet: Fast inference via early exiting from deep neural networks. *2016 23rd International Conference on Pattern Recognition (ICPR)*,
pages 2464–2469.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, and Qi Ju. 2020. Fastbert: a selfdistilling bert with adaptive inference time. arXiv preprint arXiv:2004.02178.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762.
Xiangyang Liu, Tianxiang Sun, Junliang He, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2021. Towards efficient nlp: A standard evaluation and a strong baseline. In *North American Chapter of the Association for* Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018.
Glue: A multi-task benchmark and analysis platform for natural language understanding. In *BlackboxNLP@EMNLP*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. Deepnet: Scaling transformers to 1, 000 layers. *ArXiv*,
abs/2203.00555.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *ICLR*.
Yujing Wang, Yaming Yang, Yiren Chen, Jing Bai, Ce Zhang, Guinan Su, Xiaoyu Kou, Yunhai Tong, Mao Yang, and Lidong Zhou. 2020. Textnas: A
neural architecture search space tailored for text representation. In *AAAI*.
Prajit Ramachandran, Barret Zoph, and Quoc V. Le.
2017. Swish: a self-gated activation function. arXiv:
Neural and Evolutionary Computing.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. Deebert: Dynamic early exiting for accelerating bert inference. arXiv preprint arXiv:2004.12993.
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A Smith.
2020. The right tool for the job: Matching model and instance complexities. arXiv preprint arXiv:2004.07453.
Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. Berxit: Early exiting for bert with better finetuning and extension to regression. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pages 91–104.

David R. So, Chen Liang, and Quoc V. Le. 2019. The evolved transformer. *ArXiv*, abs/1901.11117.

Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. In *NeurIPS*.

Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. 2019. Snas: Stochastic neural architecture search. *ArXiv*, abs/1812.09926.
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020a. Bert-of-theseus: Compressing bert by progressive module replacing. arXiv preprint arXiv:2002.02925.
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020b. Bert-of-theseus: Compressing bert by progressive module replacing. In EMNLP.
Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, and Lei Li. 2021a. A survey on green deep learning.
ArXiv, abs/2111.05193.
Yuhui Xu, Lingxi Xie, Wenrui Dai, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Hongkai Xiong, and Qi Tian.
2021b. Partially-connected neural architecture search for reduced computational redundancy. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 43:2953–2970.
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q.
Weinberger, and Yoav Artzi. 2020a. Revisiting fewsample bert fine-tuning. *ArXiv*, abs/2006.05987.
Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020b. Ternarybert:
Distillation-aware ultra-low bit bert. *arXiv preprint* arXiv:2009.12812.
Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. Bert loses patience: Fast and robust inference with early exit.
Advances in Neural Information Processing Systems, 33:18330–18341.
Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efficacy of pruning for model compression. *arXiv preprint arXiv:1710.01878*.
Wei Zhu. 2021. Leebert: Learned early exit for bert with cross-level optimization. In ACL.
Wei Zhu, Yuan Ni, Xiaoling Wang, and Guo Tong Xie.
2021a. Discovering better model architectures for medical query understanding. In *NAACL*.
Wei Zhu, Xiaoling Wang, Yuan Ni, and Guo Tong Xie.
2021b. Gaml-bert: Improving bert early exiting by gradient aligned mutual learning. In *EMNLP*.
Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. *ArXiv*,
abs/1611.01578.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. Learning transferable architectures for scalable image recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8697–8710.
## A Demonstrations Of Learned Architectures
In this section, the learned exit architectures on the RTE and SST-2 tasks are presented in Table 6. Discussion of the observations from the learned architectures can be found in the main content.
| Task | Layer index | Activation 1 | Activation 2 | Pooler | Encoder op 1 | Encoder op 2 |
|---|---|---|---|---|---|---|
| RTE | 1 | swish | leaky_relu | avg_pool | conv_3 | null |
| RTE | 2 | gelu | leaky_relu | max_pool | null | mha_4 |
| RTE | 3 | nullAct | swish | max_pool | mha_4 | null |
| RTE | 4 | swish | leaky_relu | cls_pool | conv_3 | null |
| RTE | 5 | swish | gelu | sa_pool | conv_5 | skip-connect |
| RTE | 6 | swish | swish | avg_pool | null | conv_5 |
| RTE | 7 | gelu | swish | max_pool | mha_4 | conv_1 |
| RTE | 8 | nullAct | leaky_relu | max_pool | null | skip-connect |
| RTE | 9 | tanh | gelu | cls_pool | conv_1 | conv_1 |
| RTE | 10 | nullAct | gelu | cls_pool | skip-connect | mha_8 |
| RTE | 11 | nullAct | gelu | avg_pool | conv_3 | null |
| RTE | 12 | gelu | nullAct | cls_pool | conv_3 | skip-connect |
| SST-2 | 1 | nullAct | tanh | sa_pool | null | conv_1 |
| SST-2 | 2 | swish | nullAct | avg_pool | conv_1 | conv_5 |
| SST-2 | 3 | gelu | tanh | sa_pool | null | mha_2 |
| SST-2 | 4 | swish | nullAct | sa_pool | skip-connect | conv_3 |
| SST-2 | 5 | gelu | nullAct | sa_pool | conv_3 | null |
| SST-2 | 6 | gelu | tanh | sa_pool | mha_pdot | conv_3 |
| SST-2 | 7 | nullAct | tanh | sa_pool | conv_3 | null |
| SST-2 | 8 | leaky_relu | leaky_relu | max_pool | conv_1 | null |
| SST-2 | 9 | nullAct | swish | max_pool | null | conv_1 |
| SST-2 | 10 | swish | leaky_relu | max_pool | conv_1 | null |
| SST-2 | 11 | nullAct | gelu | cls_pool | skip-connect | mha_4 |
| SST-2 | 12 | nullAct | swish | cls_pool | mha_4 | null |

Table 6: The learned exit architectures at each layer for the RTE and SST-2 tasks.
silva-etal-2023-authorship | Authorship Attribution of Late 19th Century Novels using {GAN}-{BERT} | https://aclanthology.org/2023.acl-srw.44 | Authorship attribution aims to identify the author of an anonymous text. The task becomes even more worthwhile when it comes to literary works. For example, pen names were commonly used by female authors in the 19th century resulting in some literary works being incorrectly attributed or claimed. With this motivation, we collated a dataset of late 19th century novels in English. Due to the imbalance in the dataset and the unavailability of enough data per author, we employed the GANBERT model along with data sampling strategies to fine-tune a transformer-based model for authorship attribution. Differently from the earlier studies on the GAN-BERT model, we conducted transfer learning on comparatively smaller author subsets to train more focused author-specific models yielding performance over 0.88 accuracy and F1 scores. Furthermore, we observed that increasing the sample size has a negative impact on the model{'}s performance. Our research mainly contributes to the ongoing authorship attribution research using GAN-BERT architecture, especially in attributing disputed novelists in the late 19th century. | # Authorship Attribution Of Late 19Th Century Novels Using Gan-Bert
Kanishka Silva University of Wolverhampton United Kingdom [email protected] Frédéric Blain Tilburg University The Netherlands [email protected] Laura Ugolini University of Wolverhampton United Kingdom [email protected] Burcu Can University of Stirling United Kingdom [email protected] Raheem Sarwar Manchester Metropolitan University United Kingdom [email protected] Ruslan Mitkov Lancaster University United Kingdom [email protected]
## Abstract
Authorship attribution aims to identify the author of an anonymous text. The task becomes even more worthwhile when it comes to literary works. For example, pen names were commonly used by female authors in the 19th century, resulting in some literary works being incorrectly attributed or claimed. With this motivation, we collated a dataset of late 19th-century novels in English. Due to the imbalance in the dataset and the unavailability of enough data per author, we employed the GAN-BERT model along with data sampling strategies to fine-tune a transformer-based model for authorship attribution. Differently from the earlier studies on the GAN-BERT model, we conducted transfer learning on comparatively smaller author subsets to train more focused author-specific models, yielding performance over 0.88 accuracy and F1 scores. Furthermore, we observed that increasing the sample size has a negative impact on the model's performance. Our research mainly contributes to the ongoing authorship attribution research using the GAN-BERT architecture, especially in attributing disputed novelists in the late 19th century.
## 1 Introduction
Authorship attribution identifies authors of a given set of unknown documents (Hu et al., 2020; Neal et al., 2018; Stamatatos, 2009). Conventional techniques and neural networks are the two main authorship attribution methods. The studies on the conventional approaches typically focus on feature engineering and stylometry. Deep learning approaches have been gaining popularity recently due to their superior results compared to the conventional approaches. Furthermore, authorship attribution can be tackled in two ways: closed-set and open-set attribution. In closed-set attribution, an author is selected from a set of candidate authors, whereas in open-set attribution, the target author may not be included in the candidate authors' list.
Applications of authorship attribution are employed in various domains, such as digital forensics
(Abbasi and Chen, 2005; Sun et al., 2012), social media analysis (Junior et al., 2016; Duman et al.,
2016; Brocardo et al., 2017) and digital humanities (Juola, 2021). In historical texts, authorship styles may carry socio-linguistic characteristics stemming from the century in which the author lived, the intellectual movements that inspired the author, and language-specific attributes. Also, in written texts, the genre and topics are crucial in defining the author's style. Several pieces of research have been undertaken in the literary and historical domains, for instance, identifying anonymous or disputed texts (Koppel et al., 2007; Kestemont et al., 2016; Tuccinardi, 2017). The work presented by Fung (2003) analyses the Federalist Papers, which comprise 85 articles and essays written by Alexander Hamilton, James Madison and John Jay. Another application of authorship attribution in literature is resolving disputed authorship. For instance, Thompson and Rasp (2016) investigate whether C.S. Lewis wrote The Dark Tower. The Shakespearean authorship dispute was addressed by Fox and Ehmoda (2012). Furthermore, attributing the author is only one of many variations of authorship analysis, as related research directions include attributing the publication year and identifying the literary genre and topic. One such example is Tausz (2011), which predicts the date of authorship of historical texts.
This research proposes a GAN-BERT-based model to enhance transformer-based authorship attribution in late 19th-century novels. To our knowledge, this is the first attempt to ensemble GAN
and BERT models, and specifically the GAN-BERT model, to address authorship attribution in literary texts. In some of the recent works on authorship attribution, the models were trained in a controlled setting with little elaboration on the data preparation stage, resulting in poor reproducibility and generalisation of these models. Here, we present an end-to-end process from domain selection to dataset collection, with insights into experiment planning.
An authorship attribution model highly depends on the number of authors represented in the training dataset and the text available per each author.
Most of the related works emphasise controlled training environments. To improve the model's generalisation and its ability to perform well in realistic scenarios, we need to identify how much the model depends on the number of authors in the training dataset and the amount of text by each author. We use a normalised dataset of 20 novels per author to avoid dataset imbalance, and to identify how much data provides better model performance, we control the text sample size drawn from each book. The research questions in this study are therefore as follows:
RQ 1: How to effectively utilise the GAN-BERT
model for authorship attribution?
RQ 2: How does the number of authors in the dataset impact the GAN-BERT performance for authorship attribution?
RQ 3: How does the amount of text data (i.e.
sample size) drawn from each novel affect the GAN-BERT performance for authorship attribution?
The remainder of the paper is organised into several sections: Section 2 demonstrates a brief literature survey. Then Section 3 describes the proposed model's architecture, and Section 4 presents the dataset collection and preparation. Section 5 elaborates on the experiment design, focusing on the research questions, Section 6 summarises the results and findings obtained, and finally, Section 7 involves the concluding remarks and future directions.
## 2 Related Work
Texts vary in terms of topic, sentiment and style.
According to Stamatatos (2009), information about the authors can be extracted from the style of their written documents. The task involves identifying the author from unknown documents, known as authorship attribution, which breaks into two major tasks: Authorship Identification and Authorship Verification. Authorship Identification is identifying a document's author by comparing a set of candidate authors (Stamatatos, 2009). Authorship Identification can be interpreted as a binary classification problem, whereas authorship attribution is a multi-class classification problem. Authorship Verification is a fundamental problem in authorship attribution which focuses on finding whether the considered person wrote one or more documents or not. Authorship Verification is comparatively challenging with less data (Koppel et al., 2011; Luyckx and Daelemans, 2008).
With the popularity of deep neural networks for NLP applications, recent authorship attribution research shares a similar trend. The works of Bagnall (2015a); Hosseinia and Mukherjee (2018);
Boumber et al. (2018) are examples of neural network-based models in authorship attribution.
Additionally, transfer learning also proved to have astonishing results. Zhang et al. (2021) introduce a Deep Authorship Verification using new metrics: DV-distance and DV-projection, which utilise pre-trained language models. Their work highlights the utilisation of pre-trained language models in our approach. Character and n-gram-based CNN (Ruder et al., 2016), Syntax-augmented CNN
(Zhang et al., 2018), and Convolutional Siamese Networks (Saedi and Dras, 2021) are some other authorship attribution models which utilise deep learning techniques. These deep learning-based applications provide valuable insights for our approach to utilising the GAN-BERT model for authorship attribution tasks.
Language Models (LM) used in the authorship tasks can be categorised as n-gram-based and neural network-based (Fourkioti et al., 2019). Ge et al. (2016) used a neural network-based language model. The works of Bagnall (2015b) present a character-level RNN-based LM combining a multiheaded classifier. To address the cross-domain problem, Barlas and Stamatatos (2020) extended Bagnall (2015b)'s works for closed-set authorship attribution by combining a multi-headed LM with a pre-trained LM. According to Barlas and Stamatatos (2020), having a normalised corpus is crucial for the performance of cross-domain authorship attribution. BertAA (Fabien et al., 2020) is the recent fine-tuned form of the pre-trained BERT
model for the authorship attribution task, which presents extensive experiments on various datasets:
Enron Email (Klimt and Yang, 2004), Blog Authorship (Schler et al., 2006) and IMDb (Seroussi et al., 2014). Although pre-trained models have gained popularity and promising results in some authorship tasks, the performance of such models highly depends on the training set.
Generative Adversarial Networks (GAN) are used in authorship-related tasks to prevent adversarial attacks, mainly in the Authorship Obfuscation problem where one's writing style is masked.
Ou et al. (2022) introduce source code authorship verification using GAN models and multi-head attention. A4NT (Shetty et al., 2018) is a GAN-based style transformation approach to authorship obfuscation, learned from data via adversarial training and sequence-to-sequence LMs. Kazlouski (2019) presents an LSTM-GAN classifier to recognise imitations generated by the A4NT (Shetty et al., 2018) model. Tang et al. (2019) present a data augmentation approach to authorship attribution in Weibo text, using Wasserstein-GAN to generate samples of the positive class.
The class imbalance problem is hard to avoid in real-world scenarios, particularly in authorship attribution. Stamatatos (2018) introduced a novel strategy to produce synthetic data for the authorship identification task. The approach that Stamatatos
(2018) mentioned is segmenting the training texts into text samples, considering the training size of the class. The works of Eder (2015) highlight how much data is required to identify authors across different languages and genres. The findings in Eder (2015) show that the minimum sample range is 2500-5000, representing the two ends for Latin, English, German, Polish, and Hungarian datasets.
Further experiments by Eder (2017) attempt to identify the minimum sample size by removing texts one by one from the training set, which suggests that a sample size of 2,000 words is appropriate. Also, Eder (2017) emphasises that this finding depends strongly on the authors. Hadjadj and Sayoud (2021) propose a hybrid PCA and SMOTE approach to oversampling, which reports outperforming the state-of-the-art accuracies. The Stylometric Set Similarity (S3)
method presents the authorship attribution task as a set similarity problem by considering 3000 novels from 500 authors curated from Project Gutenberg (Sarwar et al., 2018). Granichin et al. (2015)
present a KNN-resampling approach to authorship identification by simulating samples from 2 texts.
In previous research on authorship attribution, the combination of GAN and transformer models has not yet been explored. Furthermore, to the best of our knowledge, no attempt has been made to use the GAN-BERT model specifically for the task of authorship attribution, especially with sampling strategies for many authors and limited data. The critical literature analysis suggests that deep neural networks in authorship attribution would show promising performance with well-designed sampling strategies. Here, we propose the GAN-BERT model for authorship attribution along with various sampling strategies, and analyse how transfer learning would support the proposed model in the literary domain.
## 3 GAN-BERT Model For Authorship Attribution
Let $A$ be a collection of authors of interest, $A = \{a_1, a_2, \ldots, a_N\}$, where $N$ is the total number of authors in $A$. The document sets belonging to each author form the complete dataset $T = \{t_{a_1}, t_{a_2}, \ldots, t_{a_N}\}$, where $t_{a_i}$ is the document set attributed to author $a_i$ in the dataset. Given a text $t_u$ of an unknown author $u$, the proposed model assigns the text to the most likely author from $A$.
GAN-BERT (Croce et al., 2020) combines BERT-based models and Semi-Supervised GAN
(Salimans et al., 2016). Figure 1a illustrates the GAN-BERT model architecture, where discriminator D is utilised to classify examples and generator G generates fake examples F. The discriminator takes the vector representations returned via BERT
for unlabeled U and labelled L input texts. When training is complete, G is discarded from the model to use the rest of the model for inference.
In contrast to GAN-BERT (Croce et al., 2020),
which utilises a semi-supervised GAN model (Salimans et al., 2016) with labelled and unlabeled data, we train the GAN-BERT model with labelled data only. The discriminator D is trained over N+1 classes to assign the true samples to a class from
$\{1, 2, 3, \ldots, N\}$. The fake samples generated by the generator $G$ represent the $(N+1)$-th class. The discriminator is suitable for detecting authorship obfuscation and forgery since it is trained with fake samples similar to the original author-written texts.
Figure 1b illustrates the modified GAN model.
The GAN-BERT model generally shows superior results for classification tasks with limited labelled data. Furthermore, the intuition behind using GAN-BERT for authorship attribution is that, due to the fake data produced by the generator, the model considers not only the real writing styles but also possible fake writing styles that are synthesised.
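A minimal PyTorch sketch of the two components in Figure 1b: a generator that maps noise to fake BERT-sized representations and a discriminator over N real author classes plus one fake class. Hidden sizes, the noise dimension, and the LeakyReLU choice are our illustrative assumptions; only the single hidden layer and the 0.2 dropout follow the settings reported in Section 5.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise to a fake BERT-like sentence representation."""
    def __init__(self, noise_dim=100, hidden_dim=512, out_dim=768, dropout=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Dropout(dropout),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, noise):
        return self.net(noise)

class Discriminator(nn.Module):
    """Classifies a representation into N author classes plus one 'fake' class."""
    def __init__(self, num_authors, in_dim=768, hidden_dim=512, dropout=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Dropout(dropout),
            nn.Linear(hidden_dim, num_authors + 1),
        )

    def forward(self, representation):
        return self.net(representation)   # logits over N + 1 classes

# Real representations would come from a pre-trained BERT encoder (e.g. the [CLS] vector).
G, D = Generator(), Discriminator(num_authors=20)
fake_rep = G(torch.randn(8, 100))
fake_logits = D(fake_rep)                 # class index N (= 20) is the fake class
```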
## 4 Creating The Datasets

## 4.1 Pre-Screening Authors
We performed pre-screening on the authors before collecting the dataset, which is, to the best of our knowledge, the first attempt to perform a qualitative analysis on the literary domain for authorship attribution. We considered two parameters during the author selection process: distribution and filtering.
Distribution parameters ensure that the collected texts span equally among different attributes such as gender, genre and ethnicity. Filtering parameters focus on whether selected works by the distribution parameters should be included or excluded from the dataset. It mainly concerns the novelists' characteristics and the nature of their literary works. A
summary of these two parameters is illustrated in Table 1.
## 4.2 Dataset Collection And Validation
We collected datasets from Project Gutenberg across genres such as novels, short stories, essays, poems and biographies. There is no specific field in Project Gutenberg to indicate genre and year of publication. We manually validated texts to capture the year of publication. We also filtered novels so that all fiction had a word count greater than 10,000. To our knowledge, other researchers using Project Gutenberg have not performed similar data validation to filter novels.
In the master dataset, we have filtered 1232 novels written by 62 authors, which are segmented as follows:
1. Early 19th Century (1800-1835)
2. Mid-19th Century (1836-1870)
3. Late 19th Century (1871-1900)
4. Early 20th Century (1901-1914)
This paper focuses on the late 19th-century segment from the master dataset, which includes 541 novels. We filtered authors based on the number of novels available in the dataset and selected those with at least 20. We narrowed the author selection by selecting the top 20 authors with the most novels from this focused subset. These authors were used to train and test the proposed GAN-BERT
model. The dataset is thus uniformly distributed in terms of the number of novels per author.
The selected authors are Anthony Trollope, Arthur Conan Doyle, Bret Harte, Fergus Hume, Frances Hodgson Burnett, H.G. Wells, Henry Rider Haggard, Jack London, James Grant, John Kendrick Bangs, Joseph Conrad, Louisa May Alcott, Margaret Oliphant, Marie Corelli, Mark Twain, Mary Elizabeth Braddon, Mrs Henry Wood, Nathaniel Hawthorne, Oliver Optic, and Wilkie Collins.
## 4.3 Balanced Author Representation
The filtered dataset of late 19th-century English novels consists of 400 novels by 20 authors. Especially for deep neural networks, this dataset is too small to represent more than 20 authors. Furthermore, as authors have different writing styles, different combinations of authors in a dataset of the same size have a strong impact on model performance. We observed this problem during the preliminary experiments with manually sampled sets of authors. Therefore, to ensure a balanced representation of authors in the training and validation datasets and to mitigate the effect of different author combinations, we performed random sampling for a chosen number of author combinations, as shown in Figure 2. Different author combinations are denoted as 'sample sets'.
Furthermore, one of the aims of the experiments is to see how increasing the number of authors would affect the model's performance. To do this, we split the dataset to represent different numbers of authors.
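The random sampling of author combinations described in this subsection can be sketched as follows; the author list and set sizes are placeholders, and this is not the released script (it assumes the requested number of sample sets does not exceed the number of possible combinations).

```python
import random

def sample_author_sets(authors, n_authors_per_case, num_sample_sets, seed=42):
    """Draw `num_sample_sets` distinct random combinations of `n_authors_per_case` authors."""
    rng = random.Random(seed)
    seen, sample_sets = set(), []
    while len(sample_sets) < num_sample_sets:
        combo = tuple(sorted(rng.sample(authors, n_authors_per_case)))
        if combo not in seen:                 # keep the sample sets distinct
            seen.add(combo)
            sample_sets.append(list(combo))
    return sample_sets

# e.g. 50 random 2-author sample sets drawn from the 20 selected authors
authors = [f"author_{i}" for i in range(20)]
two_author_sets = sample_author_sets(authors, n_authors_per_case=2, num_sample_sets=50)
```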
## 4.4 Dataset Splits
We followed a leave-n-out method to split the dataset into 5 manually selected sets. For example, out of 20 authors, two were assigned to a 2-author case, while the remaining 18 were included in an 18-author case. This process was repeated to obtain 5 distinct manually selected author sample sets. The author case defines how many authors are considered in the train/test datasets. For example, a 2-author case means a focused dataset with only
(a) GAN-BERT Model (Croce et al., 2020) (b) Modified GAN-BERT Model
![4_image_0.png](4_image_0.png)
Figure 1: Model Architecture Comparison
| Parameter Type | Category | Condition |
|---|---|---|
| Distribution Parameters | Genre | Romance, Thrillers, Science Fiction, Realist |
| | Gender | Male, Female |
| | Ethnicity | American, British |
| | Doubted Authorship | Only original works by the novelist in the training set |
| | Readers | Adult, Children |
| Filtering Parameters | Publication Period | Later 19th Century (1871-1900) |
| | Number of novels during publication period | >3 |
| | Literature Genre | Novels |
| | Number of total novels | >20 |
| | Written Language | English |
| | Non-translation | Yes |
| | Multi-Authors | No |
| | Digitised work availability | Available on Project Gutenberg |

Table 1: Summary of the distribution and filtering parameters used for author pre-screening.
novels by 2 authors. We can define any number of author sample sets to perform experiments for each n-author case. For example, the manually selected author sample sets for a 2-author case include 5 different combinations of 2 authors out of the 20. Fifty random samples in a 2-author case means 50 different randomised 2-author combinations drawn from the 20 authors. Random sampling does not cover all combinations of authors in a given author case, but ensures that the majority of author combinations are considered. The dataset splitting process is illustrated in Figure 2.
We ensured the dataset splits were distinct for all the sample sets per case. The 20-author case was used as the base model to train and perform transfer learning on other models. We used a randomised approach to shuffle and return 50 and 100-author sample sets for a random sample generation.
We split train-test-validation (80:10:10) sets, stratified by author ids, for each sample set considered for the experiments, with one sample set per experimental round. The average results of all sample sets represent a particular n-author case.
The base model was trained on all 20 authors in the transfer learning experiments. The stratified split in the train-test-validation ensured a uniform distribution of novels per author, and the test data are distinct from the training data. In transfer learning, the training set may include evaluation data from the 20-author case.
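A small sketch of the stratified 80:10:10 split using scikit-learn; the function and variable names are ours, not the released preprocessing code.

```python
from sklearn.model_selection import train_test_split

def stratified_split(samples, author_ids, seed=42):
    """80:10:10 train/validation/test split, stratified by author id."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        samples, author_ids, test_size=0.2, stratify=author_ids, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```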
## 4.5 Baseline Datasets
To compare the performance of the proposed GAN-BERT model on other baseline datasets, we used the IMDB62 (Seroussi et al., 2014) and Blog Authorship (Schler et al., 2006) datasets. We created subsets of content by 20 authors from these datasets to be consistent with the 20-author dataset, which we refer to as IMDB20 and Blog20, respectively.
## 4.6 Dataset Availability
Due to the copyright restrictions explained in Section 7, we do not release the entire dataset. Instead, we release the scripts used for creating and preprocessing the dataset. We also publish the list of the authors, selected novels, and novel indices used to extract the sample sets 1.
## 5 Experiment Design
We conducted experiments on different dataset subsets and different model configurations to address the following:
1 https://github.com/Kaniz92/AA-GAN-Bert/tree/main
![5_image_0.png](5_image_0.png)

Figure 2: Dataset splitting process.
1. Random Sampling Author Combinations
2. The Impact of Transfer Learning
3. Number of Authors in Dataset
4. Text Sample Size Per Novel
We explored the GAN-BERT model under two dimensions: Random Sampling and Transfer Learning. As illustrated in Figure 2, the 20 novels per author from the 20-author dataset provide different combinations under different numbers of authors. Therefore, we first manually selected authors for each n-author case and then randomly sampled 50 and 100 author combinations. In the transfer learning experiments, we compared the performance of manually selected sample sets under standalone training and transfer learning from the 20-author dataset to each n-author case.
In a practical scenario of authorship attribution, the number of authors to compare would vary.
Therefore, we experimented with how the GAN-BERT
model responds to different numbers of authors in the dataset. Also, the number of text samples drawn from a novel can vary when representing the novel text, due to varying text lengths. We used the manually sampled author sets to identify any trend related to the text sample size drawn from a novel.
In the default setting, unless specified, we used 20 samples per novel drawn sequentially from the book text for training and testing. We first trained the base model on 20-authors for 10 epochs, using Adam optimiser, one hidden layer for both generator and the discriminator, a dropout rate of 0.2, batch size of 8, a warm-up proportion of 0.1, and learning rate of 1e−5 for both generator and the discriminator. Then the pre-trained 20-author model was used for transfer learning on smaller subsets of each case in {2, 4, 6, 8, 10, 12, 14, 16, 18}-author counts and trained further on these sub-sets for 5 epochs.
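The training setup can be summarised in a configuration sketch. The generator and discriminator below are simple placeholders rather than the actual GAN-BERT modules, and the hidden size of 768 assumes a BERT-base encoder:

```python
import torch
from torch import nn
from transformers import get_linear_schedule_with_warmup

config = {"epochs_base": 10, "epochs_transfer": 5, "batch_size": 8,
          "dropout": 0.2, "lr": 1e-5, "warmup_proportion": 0.1}

# Placeholder one-hidden-layer generator and discriminator.
# Following GAN-BERT, the discriminator predicts k author classes plus one "fake" class.
generator = nn.Sequential(nn.Linear(100, 768), nn.LeakyReLU(),
                          nn.Dropout(config["dropout"]), nn.Linear(768, 768))
discriminator = nn.Sequential(nn.Linear(768, 768), nn.LeakyReLU(),
                              nn.Dropout(config["dropout"]), nn.Linear(768, 20 + 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=config["lr"])
d_opt = torch.optim.Adam(discriminator.parameters(), lr=config["lr"])

num_steps = 1000  # in practice: len(train_loader) * config["epochs_base"]
warmup = int(config["warmup_proportion"] * num_steps)
g_sched = get_linear_schedule_with_warmup(g_opt, warmup, num_steps)
d_sched = get_linear_schedule_with_warmup(d_opt, warmup, num_steps)
```

For transfer learning, the same components would be initialised from the 20-author checkpoint and trained for `epochs_transfer` further epochs on the smaller author subset.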
We compared the proposed GAN-BERT model with different baseline models such as word-level TF-IDF, character n-gram, Stylometric features (Sari et al., 2018) and BertAA (Fabien et al.,
2020) on the 20-authors dataset, 18-authors dataset, IMDB20, and Blog20 datasets. These baseline experiments provide insights into how the created datasets perform with other baseline models and how other datasets perform with the proposed GAN-BERT model. To be consistent with the rest of the experiments, we selected 20 samples per document per author, but the 20-sample restriction is not applied to the baseline models.
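As an illustration, a word-level TF-IDF baseline in the spirit of Fabien et al. (2020) could be set up as below; the logistic regression classifier and the feature cap are our assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tfidf_baseline = make_pipeline(
    TfidfVectorizer(analyzer="word", max_features=50_000),
    LogisticRegression(max_iter=1000),
)
# tfidf_baseline.fit(train_texts, train_author_ids)
# accuracy = tfidf_baseline.score(test_texts, test_author_ids)
```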
## 6 Results And Discussion
For each experiment, we report Accuracy, F1, Precision, and Recall, averaged over the manually and randomly selected sample sets.
## 6.1 Random Sampling Author Combinations
Analysing the model with manually selected author sample sets may fail to describe the results and any trends because of bias factors. For example, the unexpectedly high performance of the 18-author model on manually sampled authors, as in Figure 3a, could be due to biases in the generated manual sample sets. Therefore, we conducted additional experiments for the 50 and 100 sample sets using random sampling. Rather than selecting books randomly, we focused on arranging authors into different sample sets while keeping the books per author the same (20 books per author). This experiment explores whether the model is robust to arbitrary author combinations. Before deciding on the random sampling limits, we analysed the maximum number of author combinations per case. To cover all the author cases, the maximum random sampling count is 190, so we decided to experiment with 50 and 100 random samples.
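A sketch of how such random author combinations can be drawn (helper names are illustrative):

```python
import random
from itertools import combinations
from math import comb

authors = list(range(20))
print(comb(20, 2))  # 190 possible 2-author combinations, the smallest count across cases

def sample_author_sets(n_authors: int, n_sets: int, seed: int = 0):
    """Draw distinct random author combinations for an n-author case."""
    rng = random.Random(seed)
    all_combos = list(combinations(authors, n_authors))
    return rng.sample(all_combos, min(n_sets, len(all_combos)))

sets_50 = sample_author_sets(n_authors=2, n_sets=50)
```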
Compared to the manually selected author sample sets, the 50 and 100 random samples achieve higher accuracy for all the author cases, with accuracies of at least 0.96. Results in Table 2 and Figure 3b show that the model is robust, with consistent performance over different author cases.
## 6.2 The Impact Of Transfer Learning
The intuition behind applying transfer learning to the authorship attribution model is that, instead of having a model that learns each author's style and overfits to a particular dataset with a fixed number of authors, the model becomes more practical for real-world scenarios if it learns the authorship attribution task regardless of the number of authors. This also applies to different author styles, regardless of topic or genre. Moreover, transfer learning allows knowledge to be transferred to settings with limited data.
Extensive experiments have been carried out to identify how transfer learning has affected the model's performance from the 20-author cases to smaller author subsets. We trained standalone and transfer learning models using the same hyperparameters as the base model.
Transfer learning has substantially improved the model's performance, especially as the number of authors increases. The best-performing model was observed for the 2-author case, and the worst-performing model for the 18-author case. Overall, the transfer learning results suggest that it is a promising technique for improving performance, especially for smaller datasets.
## 6.3 Incremental Number Of Authors In The Dataset
We designed the dataset subsets to increment the number of authors by two, ranging from [2, 18], to investigate how the author count would affect the model's performance. The number of samples per author is uniform across each author sample set and case. We also selected the same 20 books for each author to ensure that the topics or genres do not affect the experiments. One text sample should not exceed 512 words, BERT's maximum input token size. Therefore, we set the sample size to 512 words and drew 20 sequential text samples from each book, representing one author by 400 (20 x 20) instances before the train-test split.
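The chunking described above can be sketched as follows (simple whitespace tokenisation; the released preprocessing scripts may differ):

```python
def chunk_book(text: str, chunk_words: int = 512, n_chunks: int = 20):
    """Draw up to n_chunks sequential, non-overlapping 512-word samples from a book."""
    words = text.split()
    chunks = []
    for i in range(n_chunks):
        start = i * chunk_words
        chunk = words[start:start + chunk_words]
        if len(chunk) < chunk_words:
            break  # stop early if the book runs out of text
        chunks.append(" ".join(chunk))
    return chunks

# 20 chunks from each of the 20 books yield 400 instances per author.
```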
Both the standalone and transfer learning models for five manually selected author sample sets show a declining trend in performance as the number of authors increases, as illustrated in Table 3 and Table 2. The binary classification shows the best performance overall, while the multi-class classification shows comparatively a lower performance.
Averaging accuracies for transfer learning for 50 and 100 randomly sampled author sets are illustrated in Table 2. The results do not indicate any clear trend with the author counts, but accuracy and F1 are consistent and higher than manually selected author sample sets.
As illustrated in Figure 3b, manual samples and random samples show a clear distinction as the number of authors in the dataset increases.
The model performance therefore depends highly on how the sample sets were defined, i.e. on the particular author combinations. Consequently, strategies must be explored to overcome the biases towards different configurations of authors' sample sets.
| n-Authors | 5 Manual Samples (Acc / F1 / P / R) | 50 Random Samples (Acc / F1 / P / R) | 100 Random Samples (Acc / F1 / P / R) |
|---|---|---|---|
| 2-authors | 0.98† / 0.98† / 0.99† / 0.98† | 0.98 / 0.98 / 0.98 / 0.98 | 0.98 / 0.98 / 0.99 / 0.98 |
| 4-authors | 0.96 / 0.96 / 0.96 / 0.96 | 0.98 / 0.98 / 0.98 / 0.98 | 0.98 / 0.98 / 0.98 / 0.98 |
| 6-authors | 0.93 / 0.93 / 0.93 / 0.93 | 0.98 / 0.98 / 0.98 / 0.98 | 0.97∗ / 0.97∗ / 0.97∗ / 0.97∗ |
| 8-authors | 0.91 / 0.91 / 0.92 / 0.91 | 0.96∗ / 0.96∗ / 0.97∗ / 0.96∗ | 0.98 / 0.98 / 0.98 / 0.98 |
| 10-authors | 0.92 / 0.92 / 0.92 / 0.92 | 0.98 / 0.98 / 0.98 / 0.98 | 0.99† / 0.99† / 0.99† / 0.99† |
| 12-authors | 0.92 / 0.92 / 0.93 / 0.92 | 0.99† / 0.99† / 0.99† / 0.99† | 0.99† / 0.99† / 0.99† / 0.99† |
| 14-authors | 0.88∗ / 0.88∗ / 0.90∗ / 0.88∗ | 0.98 / 0.98 / 0.98 / 0.98 | 0.99† / 0.99† / 0.99† / 0.99† |
| 16-authors | 0.89 / 0.89 / 0.90∗ / 0.89 | 0.98 / 0.98 / 0.98 / 0.98 | 0.99† / 0.99† / 0.99† / 0.99† |
| 18-authors | 0.88∗ / 0.88∗ / 0.90∗ / 0.88∗ | 0.99† / 0.99† / 0.99† / 0.99† | 0.98 / 0.98 / 0.98 / 0.98 |

Table 2: Results (Accuracy / F1 / Precision / Recall) for the 5 manually selected and the 50 and 100 randomly sampled author sets per n-author case (∗: minimum, †: maximum value per metric).
![7_image_0.png](7_image_0.png)

Figure 3: Accuracy across author cases for (a) manually selected and (b) randomly sampled author sets.
| n-Authors | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|
| 2-authors | 0.95† | 0.96† | 0.95† | 0.95† |
| 4-authors | 0.82 | 0.85 | 0.82 | 0.82 |
| 6-authors | 0.82 | 0.84 | 0.82 | 0.83 |
| 8-authors | 0.76 | 0.78 | 0.76 | 0.75 |
| 10-authors | 0.72 | 0.75 | 0.72 | 0.72 |
| 12-authors | 0.70 | 0.74 | 0.70 | 0.70 |
| 14-authors | 0.66 | 0.70 | 0.66 | 0.66 |
| 16-authors | 0.64∗ | 0.67∗ | 0.64∗ | 0.64∗ |
| 18-authors | 0.80 | 0.82 | 0.80 | 0.80 |

Table 3: Standalone model results for the five manually selected author sample sets per n-author case.

## 6.4 Text Sample Size Per Novel

To investigate how each novel's sample size affects the model performance, we selected the 18-author case and experimented across different text sample sizes ranging from 5 to 35 text chunks per novel. Each sample consists of a text chunk of 512 words drawn from the book text. For example, a text sample size of 5 means that we selected 5 x 512-word text chunks from the book text, resulting in 5 separate instances in the dataset. We performed this experiment using the same 20 books per author.

The results in Table 4 demonstrate that increasing the sample size has a negative impact on the model's performance across all sample sets for the 18-author model. In this experiment, as the sample size increases, the model is trained on the same novels and the same 18 authors. One of the main findings is that larger text samples from novels do not always lead to better performance. The model may have performed worse at larger text sample sizes due to high variance in the data or overfitting. Hence, further investigation is needed to identify the optimal text sample size per novel under different experiment settings.

| Sample Size | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|
| 5 | 0.92† | 0.93† | 0.92† | 0.92† |
| 10 | 0.91 | 0.91 | 0.91 | 0.91 |
| 15 | 0.89 | 0.90 | 0.89 | 0.89 |
| 20 | 0.80∗ | 0.82∗ | 0.80∗ | 0.80∗ |
| 25 | 0.86 | 0.87 | 0.86 | 0.86 |
| 30 | 0.86 | 0.87 | 0.86 | 0.86 |

Table 4: Results for different text sample sizes per novel in the 18-author case.
## 6.5 Baseline Experiments
We evaluated various baseline models on different datasets, including IMDB20, Blog20, and the 20-authors and 18-authors datasets. The accuracy results obtained are reported in Table 5. Using stylometric features performed the worst, with an accuracy of 0.14 on the IMDB20 dataset. The proposed GAN-BERT model outperforms the stylometric and character n-gram-based models but does not perform as well as the TF-IDF and BertAA models. Our proposed model performs as well as the other models on the IMDB20 dataset; however, BertAA outperforms the others on our dataset. This indicates that further improvements (e.g. including other features such as TF-IDF or stylometric features) are needed to enhance the proposed GAN-BERT model's performance on specific datasets.
| Model | IMDB20 | Blog20 | 20-authors | 18-authors |
|---|---|---|---|---|
| Stylometric (Sari et al., 2018) | 0.14∗ | 0.11∗ | 0.14∗ | 0.11∗ |
| Character Ngram (Fabien et al., 2020) | 0.69 | 0.23 | 0.94 | 0.95 |
| Word level TF-IDF (Fabien et al., 2020) | 0.97† | 0.47 | 0.91 | 0.90 |
| BertAA (Fabien et al., 2020) | 0.97† | 0.62† | 0.99† | 0.99† |
| Proposed Model | 0.96 | 0.40 | 0.63 | 0.80 |

Table 5: Baseline Experiment Results. ∗: minimum result across a metric; †: maximum value across a metric.
## 7 Conclusion
This research proposes a GAN-BERT-based model for authorship attribution in late-19th-century novels. Our primary focus is identifying how the author count and the text sample size per book affect the model's performance. The five manually selected author combinations indicate that the model's performance degrades as the number of authors increases. The declining trend is the same for transfer-learning models, although their overall performance is better than that of the standalone models. Additionally, we showed how transfer learning improves the mean accuracies over manually selected author sample sets for each n-author case. A future improvement would be to experiment with few-shot and zero-shot settings. Furthermore, it would be interesting to experiment with different GAN and transformer models within this model architecture.
## Limitations
While this research provides valuable insights into using the GAN-BERT model for authorship attribution, there are also a few limitations to note. We only focused on a limited number of authors from the late 19th century, which may limit the model's generalisability. Future research should consider using the whole dataset of long-19th-century novelists to address this limitation. Due to the copyright issues explained in Section 4.6 and Section 7, we do not release the whole dataset; instead, we release scripts to reproduce the datasets. Furthermore, incorporating a richer feature set and comparing performance among different models would be another interesting research direction.
## Ethics Statement
The period 1800-1914 is considered out of copyright in Project Gutenberg, under the categories 'Rule 1: Works First Published Before 95 Years Ago and Before 1977' and 'Rule 10(c) - Works of Treaty Parties and Proclamation Countries First Published Between 1923 and 1977' (Gutenberg). Although literary works from this period are out of copyright, we stored the data securely with restricted access. We do not release the dataset.
## References
A. Abbasi and Hsinchun Chen. 2005. Applying authorship analysis to extremist-group web forum messages.
IEEE Intelligent Systems, 20:67–75.
Douglas Bagnall. 2015a. Author identification using multi-headed recurrent neural networks. *ArXiv*,
abs/1506.04891.
Douglas Bagnall. 2015b. Author identification using multi-headed recurrent neural networks. In Working Notes of CLEF 2015 - Conference and Labs of the Evaluation forum, Toulouse, France, September 8-11, 2015, volume 1391 of *CEUR Workshop Proceedings*.
CEUR-WS.org.
Georgios Barlas and Efstathios Stamatatos. 2020. Crossdomain authorship attribution using pre-trained language models. In *Artificial Intelligence Applications* and Innovations - 16th IFIP WG 12.5 International Conference, AIAI 2020, Neos Marmaras, Greece, June 5-7, 2020, Proceedings, Part I, volume 583 of IFIP Advances in Information and Communication Technology, pages 255–266. Springer.
Dainis Boumber, Yifan Zhang, and Arjun Mukherjee. 2018. Experiments with convolutional neural networks for multi-label authorship attribution. In LREC.
Marcelo Luiz Brocardo, Issa Traoré, Isaac Woungang, and Mohammad S. Obaidat. 2017. Authorship verification using deep belief network systems. Int. J.
Commun. Syst., 30.
Danilo Croce, Giuseppe Castellucci, and Roberto Basili.
2020. GAN-BERT: generative adversarial learning for robust text classification with a bunch of labeled examples. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL 2020, Online, July 5-10, 2020, pages 2114–2119.
Association for Computational Linguistics.
Sevtap Duman, Kubra Kalkan-Cakmakci, Manuel Egele, William K. Robertson, and Engin Kirda. 2016.
Emailprofiler: Spearphishing filtering with header and stylometric features of emails. 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), 1:408–416.
Maciej Eder. 2015. Does size matter? authorship attribution, small samples, big problem. *Digit. Scholarsh.*
Humanit., 30(2):167–182.
Maciej Eder. 2017. Short samples in authorship attribution: A new approach. In 12th Annual International Conference of the Alliance of Digital Humanities Organizations, DH 2017, Montréal, Canada, August 8-11, 2017, Conference Abstracts. Alliance of Digital Humanities Organizations (ADHO).
Maël Fabien, Esaú Villatoro-Tello, Petr Motlícek, and Shantipriya Parida. 2020. Bertaa : BERT fine-tuning for authorship attribution. In Proceedings of the 17th International Conference on Natural Language Processing, ICON 2020, Indian Institute of Technology Patna, Patna, India, December 18-21, 2020, pages 127–137. NLP Association of India (NLPAI).
Olga Fourkioti, Symeon Symeonidis, and Avi Arampatzis. 2019. Language models and fusion for authorship attribution. *Inf. Process. Manag.*, 56(6).
Neal P. Fox and Omran Ehmoda. 2012. Statistical stylometrics and the marlowe-shakespeare authorship debate.
Glenn Fung. 2003. The disputed federalist papers: SVM
feature selection via concave minimization. In *Proceedings of the Richard Tapia Celebration of Diversity in Computing Conference 2003, Atlanta, Georgia,*
USA, October 15-18, 2003, pages 42–46. ACM.
Zhenhao Ge, Yufang Sun, and Mark J. T. Smith. 2016.
Authorship attribution using a neural network language model. In Proceedings of the Thirtieth AAAI
Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA, pages 4212–4213.
AAAI Press.
Oleg Granichin, Lev Klebanov, Dmitry Shalymov, and Zeev Volkovich. 2015. Authorship attribution method based on knn re-sampling approach. In PROCEEDINGS ELMAR-INTERNATIONAL SYMPOSIUM ELECTRONICS IN MARINE. Institute of Electrical and Electronics Engineers Inc.
Project Gutenberg. *Copyright How-To*. https://www.
gutenberg.org/help/copyright.html.
Hassina Hadjadj and Halim Sayoud. 2021. Arabic authorship attribution using synthetic minority oversampling technique and principal components analysis for imbalanced documents. *Int. J. Cogn. Informatics Nat. Intell.*, 15(4):1–17.
Marjan Hosseinia and Arjun Mukherjee. 2018. Experiments with neural networks for small and large scale authorship verification. *ArXiv*, abs/1803.06456.
Zhiqiang Hu, Roy Ka-Wei Lee, Lei Wang, Ee-Peng Lim, and Bo Dai. 2020. Deepstyle: User style embedding for authorship attribution of short texts. In Web and Big Data - 4th International Joint Conference, APWeb-WAIM 2020, Tianjin, China, September 18-20, 2020, Proceedings, Part II, volume 12318 of Lecture Notes in Computer Science, pages 221–229.
Springer.
Sylvio Barbon Junior, Rodrigo Augusto Igawa, and Bruno Bogaz Zarpelão. 2016. Authorship verification applied to detection of compromised accounts on online social networks. Multimedia Tools and Applications, 76:3213–3233.
Patrick Juola. 2021. Verifying authorship for forensic purposes: A computational protocol and its validation. *Forensic Science International*, 325:110824.
Andrei Kazlouski. 2019. Text style imitation to prevent author identification and profiling. Master's thesis, Aalto University. School of Science.
Mike Kestemont, Justin Anthony Stover, Moshe Koppel, Folgert Karsdorp, and Walter Daelemans. 2016.
Authenticating the writings of julius caesar. *Expert* Syst. Appl., 63:86–96.
Bryan Klimt and Yiming Yang. 2004. The enron corpus:
A new dataset for email classification research. In Machine Learning: ECML 2004, 15th European Conference on Machine Learning, Pisa, Italy, September 20-24, 2004, Proceedings, volume 3201 of Lecture Notes in Computer Science, pages 217–226. Springer.
Moshe Koppel, Jonathan Schler, and Shlomo Engelson Argamon. 2011. Authorship attribution in the wild.
Language Resources and Evaluation, 45:83–94.
Moshe Koppel, Jonathan Schler, and Elisheva BonchekDokow. 2007. Measuring differentiability: Unmasking pseudonymous authors. *J. Mach. Learn. Res.*,
8:1261–1276.
Kim Luyckx and Walter Daelemans. 2008. Authorship attribution and verification with many authors and limited data. In *COLING*.
Tempestt J. Neal, Kalaivani Sundararajan, Aneez Fatima, Yiming Yan, Yingfei Xiang, and Damon L.
Woodard. 2018. Surveying stylometry techniques and applications. *ACM Comput. Surv.*, 50(6):86:1–
86:36.
Weihan Ou, Steven H.H. Ding, Yuan Tian, and Leo Song. 2022. Scs-gan: Learning functionalityagnostic stylometric representations for source code authorship verification. *IEEE Transactions on Software Engineering*, pages 1–1.
Sebastian Ruder, Parsa Ghaffari, and John G. Breslin.
2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. *CoRR*, abs/1609.06686.
Chakaveh Saedi and Mark Dras. 2021. Siamese networks for large-scale author identification. *Comput.*
Speech Lang., 70:101241.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In *Advances in* Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2226–2234.
Yunita Sari, Mark Stevenson, and Andreas Vlachos.
2018. Topic or style? exploring the most useful features for authorship attribution. In *Proceedings of the* 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA,
August 20-26, 2018, pages 343–353. Association for Computational Linguistics.
Raheem Sarwar, Chenyun Yu, Ninad Tungare, Kanatip Chitavisutthivong, Sukrit Sriratanawilai, Yaohai Xu, Dickson Chow, Thanawin Rakthanmanon, and Sarana Nutanong. 2018. An effective and scalable framework for authorship attribution query processing. *IEEE Access*, 6:50030–50048.
Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W. Pennebaker. 2006. Effects of age and gender on blogging. In *Computational Approaches* to Analyzing Weblogs, Papers from the 2006 AAAI
Spring Symposium, Technical Report SS-06-03, Stanford, California, USA, March 27-29, 2006, pages 199–205. AAAI.
Yanir Seroussi, Ingrid Zukerman, and Fabian Bohnert. 2014. Authorship attribution with topic models.
Comput. Linguistics, 40(2):269–310.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018.
A4NT: author attribute anonymity by adversarial training of neural machine translation. In 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, August 15-17, 2018, pages 1633–1650. USENIX Association.
Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. *J. Assoc. Inf. Sci. Technol.*, 60(3):538–556.
Efstathios Stamatatos. 2018. Masking topic-related information to enhance authorship attribution. *J. Assoc.*
Inf. Sci. Technol., 69(3):461–473.
Jianwen Sun, Zongkai Yang, Sanya Liu, and Pei Wang.
2012. Applying stylometric analysis techniques to counter anonymity in cyberspace. *J. Networks*,
7:259–266.
Wanbing Tang, Chunhua Wu, Xiaolong Chen, Yudao Sun, and Chen Li. 2019. Weibo authorship identification based on wasserstein generative adversarial networks. In 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP),
pages 1–5.
Andrew Tausz. 2011. Predicting the date of authorship of historical texts.
Jeffrey R. Thompson and John Rasp. 2016. Did c.
s. lewis write the dark tower?: An examination of the small-sample properties of the thisted-efron tests of authorship. *Austrian Journal of Statistics*,
38(2):71–82.
Enrico Tuccinardi. 2017. An application of a profilebased method for authorship verification: Investigating the authenticity of pliny the younger's letter to trajan concerning the christians. Digit. Scholarsh.
Humanit., 32:435–447.
Richong Zhang, Zhiyuan Hu, Hongyu Guo, and Yongyi Mao. 2018. Syntax encoding with application in authorship attribution. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -
November 4, 2018, pages 2742–2753. Association for Computational Linguistics.
Yifan Zhang, Dainis Boumber, Marjan Hosseinia, Fan Yang, and Arjun Mukherjee. 2021. Improving authorship verification using linguistic divergence. In ROMCIR@ECIR. |
fanton-etal-2023-guides | How-to Guides for Specific Audiences: A Corpus and Initial Findings | https://aclanthology.org/2023.acl-srw.46 | Instructional texts for specific target groups should ideally take into account the prior knowledge and needs of the readers in order to guide them efficiently to their desired goals. However, targeting specific groups also carries the risk of reflecting disparate social norms and subtle stereotypes. In this paper, we investigate the extent to which how-to guides from one particular platform, wikiHow, differ in practice depending on the intended audience. We conduct two case studies in which we examine qualitative features of texts written for specific audiences. In a generalization study, we investigate which differences can also be systematically demonstrated using computational methods. The results of our studies show that guides from wikiHow, like other text genres, are subject to subtle biases. We aim to raise awareness of these inequalities as a first step to addressing them in future work. | # How-To Guides For Specific Audiences: A Corpus And Initial Findings
Nicola Fanton (he/they), Agnieszka Falenska, Michael Roth
University of Stuttgart, Institute for Natural Language Processing
{firstname.lastname}@ims.uni-stuttgart.de
## Abstract
Instructional texts for specific target groups should ideally take into account the prior knowledge and needs of the readers in order to guide them efficiently to their desired goals.
However, targeting specific groups also carries the risk of reflecting disparate social norms and subtle stereotypes. In this paper, we investigate the extent to which how-to guides from one particular platform, wikiHow, differ in practice depending on the intended audience. We conduct two case studies in which we examine qualitative features of texts written for specific audiences. In a generalization study, we investigate which differences can also be systematically demonstrated using computational methods. The results of our studies show that guides from wikiHow, like other text genres, are subject to subtle biases. We aim to raise awareness of these inequalities as a first step to addressing them in future work.
## 1 Introduction
How-to guides provide practical instructions that help humans to achieve specific goals. In the past decades, such guides also attracted increasing interest in NLP and AI research (Branavan et al., 2009; Chu et al., 2017; Anthonio et al., 2020). Resources such as wikiHow,1a collaboratively edited online platform for instructional texts, make it possible to scale research efforts to hundreds of thousands of articles. By covering an ever-increasing number of guides, including niche topics and articles for minority groups, there is also an increasing risk of perpetuating stereotypes and jeopardizing general accessibility. In fact, we notice that wikiHow already contains articles written for specific target groups as well as articles that exist in different versions for different audiences. As an example, Table 1 shows two articles with the same *title*, "Act Like a Kid Again", one with the *indicator* '(Girls)'
and one with '(Boys)'.
1www.wikihow.com

Act Like a Kid Again (Girls)
Eat well and exercise, but don't obsess about your body. Be healthy without stressing too much about it. (. . . ) Generally, **go for lots**
of fruits and veggies. And even though kids love sugar, **don't eat too much** of it!
Act Like a Kid Again (Boys)
Eat your childhood favorite food. Recollect every snack, chocolates, ice cream, candy bars, cotton candy and **everything that you**
loved as a kid or would make you feel pampered. Eat as per your capacity as too much at once may make you feel uncomfortable.
Table 1: Two versions of the same guide in wikiHow.
Among other things, we find that such articles dramatically differ in terms of details. For example, the texts highlighted in Table 1 vary in how much they focus on issues potentially related to body images. As such, the articles reflect disparate standards, which ultimately may contribute to discrimination (Prentice and Carranza, 2002). The specific example can also be linked to observations of gender differences in weight concerns from psychology (Dougherty et al., 2022), which might represent a reason for *disparate treatment*. On the surface, it is not always possible to say exactly why there are certain differences in articles for specific audiences. However, through qualitative and quantitative comparisons on the linguistic level, we can at least determine what types of differences are present and to what extent they can be systematically identified. In this sense, we aim to contribute to questions about biases and fairness in data and, at the same time, connect to related research in psychology and other social sciences.
There already exists a large body of research that examines biases and stereotypes in NLP data and, likewise, how-to guides from wikiHow have been used as training material for a variety of language processing tasks (§2). However, previous studies have not explicitly looked into issues related to bias in the wikiHow data. As a first step towards addressing this gap, we create our own sub-corpora of how-to guides, which let us investigate differences across articles for specific target groups (§3).
We perform two case studies and a generalization study on our collected data: In the first study, we identify a number of articles that exist in multiple variants for different target groups and examine them in terms of distinctive content and linguistic characteristics (§4). As a second case study, we explicitly examine how far topics covered for specific target groups differ from each other (§5). Finally, we investigate whether the qualitative findings from our case studies can be validated quantitatively and generalized to our whole corpus using computational modeling (§6).
In summary, we find systematic differences between articles for specific groups in terms of topic, style, and content. We conclude the paper with a discussion of these findings and point out links to existing work in the social sciences (§7).
## 2 Related Work
We summarize existing work on the three strains of research that this paper builds on: wikiHow as a data source (§2.1), subtle biases in datasets (§2.2), as well as understanding the characteristics of texts that target specific audiences (§2.3).
## 2.1 Wikihow As A Data Source
wikiHow is a prominent data source for a variety of tasks, including summarization (Koupaee and Wang, 2018), goal-step inference (Zhang et al.,
2020), and question answering (Cai et al., 2022).
By exploiting the revision history of wikiHow, Anthonio et al. (2020) created **wikiHowToImprove**,
which has been used to better understand phenomena related to the (re-)writing process of how-to guides (Roth and Anthonio, 2021; Anthonio et al.,
2022). Writing, but especially revising, instructions should presumably take into account the readers' context, perspective and knowledge about the domain and the world. The need for clarification stands out prominently as a main purpose of the refinements of wikiHow guides (Bhat et al., 2020). It has been shown that while annotators tend to agree that "revised means better", the disagreements can be caused by differences in common knowledge and intuitions (Anthonio and Roth, 2020). As specific phenomena, previous work studied implicit references and lexical vagueness (Anthonio and Roth, 2021; Debnath and Roth, 2021). However, none of the aforementioned studies accounted for audience-specific differences. This work takes a first step to close this gap.
## 2.2 Subtle Biases In Datasets
Diagnosing the presence of biases in data is one of the crucial steps in diminishing the spread of harmful stereotypes. This work contributes to the research on *subtle biases*, i.e., textual patterns that implicitly reflect societal power asymmetries. Such biases are embedded in specific linguistic phenomena (e.g., masculine generics; Swim et al., 2004)
or in inequalities in how people from different demographic groups are represented (e.g., emphasizing the romantic relationships in the bibliographies of women; Wagner et al., 2015). Moreover, they can be frequent even in domains where blatant stereotypes and openly expressing beliefs about social hierarchies is generally considered inappropriate (Cervone et al., 2021). For example, there is a long line of work analyzing subtle stereotypes in Wikipedia (Callahan and Herring, 2011; Reagle and Rhue, 2011; Konieczny and Klein, 2018; Schmahl et al., 2020, among others), where the lack of diversity represents an issue already at the level of the editors' community (Lam et al., 2011). Beyond notability for representation itself, linguistic aspects in Wikipedia show a remarkable disparity concerning biographies of men and women, both in terms of topics and polarity of abstract terminology (Wagner et al., 2016). Such inequalities do not pertain only to biographies but find systemic correspondence in all domains and across languages
(Falenska and Çetinoğlu, 2021).
To the best of our knowledge, the presence of subtle stereotypes in wikiHow has not yet been investigated. However, the guides from this platform are a valuable entry point for studying bias, as they are produced by a community of contributors and by experts2 suggesting how to perform activities.
In other words, given the different purposes of the platforms, while Wikipedia data is rather descriptive, wikiHow data features instructional texts that potentially differ depending on the audience.
## 2.3 Different Audiences
The mind of the readers features a priori goals that affect the understanding of written texts (Fum et al.,
1986). However, the goals and knowledge of different (groups of) people may vary. An example of work that considers different readers' expertise regards title generation (Senda and Shinohara, 2002).
In that work, less expert readers were found to be tentatively more influenced by effective titles. Consequently, a system for revising titles accounting for the readers' expertise has been proposed (Senda et al., 2004). As such, that contribution indicates the importance of considering the target audience for efficient communication. Additionally, different audiences can understand to different extents technical terminology (Senda et al., 2006; Elhadad and Sutaria, 2007) and causation (Siddharthan and Katsos, 2010). Previous contributions accounted for different target groups also in the controllable text generation tasks of paraphrasing (Kajiwara et al., 2013), text simplification (Scarton and Specia, 2018; Sheang and Saggion, 2021), machine translation (Agrawal and Carpuat, 2019), and dictionary examples generation (He and Yiu, 2022).
## 3 Corpus Construction
As introduced in §2.1, wikiHowToImprove is a well-established data set derived from wikiHow and consisting of more than 246,000 how-to guides. In general, each guide consists of multiple revisions of an *article*, a fixed goal that is named in the *title*,
and (optionally) an *indicator* that follows the title in parentheses (cf. Table 1). As we are interested in how-to guides for different target groups, we filter the data for indicators that specify a group of people as targets, which we also refer to as the *audience*. Table 2 lists the 20 most frequent indicators extracted from wikiHowToImprove.
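A sketch of how such indicators can be extracted from guide titles; the regular expression and the example titles are illustrative assumptions about the data format:

```python
import re
from collections import Counter

INDICATOR_RE = re.compile(r"\(([^)]+)\)\s*$")  # text in parentheses at the end of a title

def get_indicator(title: str):
    match = INDICATOR_RE.search(title)
    return match.group(1) if match else None

titles = ["Act Like a Kid Again (Girls)", "Act Like a Kid Again (Boys)", "Make Money"]
counts = Counter(ind for t in titles if (ind := get_indicator(t)) is not None)
print(counts.most_common(20))
```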
Based on a manual grouping of these indicators, we find that 15 out of 20 indicators refer to attributes of performative gender and age (the remaining five are underlined in Table 2). Apart from their high frequency, both of these attributes are of interest to studies in the social sciences, in which they are often used as independent variables (Cortina et al., 2013; Cha and Weeden, 2014; Palència et al.,
2014). Following a traditional binary setup, we distinguish two audiences based on gender, women
(W) and men (M), and two audiences based on age, kids (K) and teens (T).3 For each type of audience, we create a set of all indicators used and collect all corresponding guides by extracting the latest article versions from wikiHowToImprove.

3Note that while the selected audiences follow discrete

| Rank | Indicator | # | Rank | Indicator | # |
|---|---|---|---|---|---|
| 1 | Girls | 370 | 11 | Guys | 35 |
| 2 | for Girls | 284 | 12 | for Women | 35 |
| 3 | for Kids | 182 | 13 | Women | 34 |
| 4 | Kids | 114 | 14 | UK | 34 |
| 5 | Teens | 110 | 15 | for Men | 31 |
| 6 | Teen Girls | 100 | 16 | Christianity | 31 |
| 7 | for Teens | 73 | 17 | Men | 29 |
| 8 | USA | 49 | 18 | for Beginners | 29 |
| 9 | for Guys | 42 | 19 | Boys | 25 |
| 10 | Windows | 38 | 20 | Teenage Girls | 25 |

Table 2: Counts of the 20 most frequent indicators.

| | W | M | K | T |
|---|---|---|---|---|
| Indicators | 29 | 13 | 23 | 16 |
| Articles | 993 | 209 | 499 | 411 |
| Sentences per article | 40 | 50 | 29 | 43 |
| Words per article | 509 | 682 | 352 | 544 |

Table 3: Statistics of the wikiHowAudiences corpus per target audience.
Statistics of our corpus with audience-specific how-to guides are provided in Table 3. We note that there is a much higher number of indicators and articles for W than for M. In comparison, the number of articles and indicators for K and T are similar. With only 2,112 how-to guides in total, the corpus seems relatively small. However, the average length of articles ranges from 352 to 682 words, which adds up to a corpus size of more than one million words. Throughout this work, we refer to this dataset as wikiHowAudiences.
4 Next, we approach it in its entirety with two case studies.
## 4 Case Study: Same Title, Different Audience
Our starting example from Table 1 includes two guides with the same title but different target indicators. Such guides outline the ultimate instances of instructions that are written for different audiences.
| | Category | # | Example title |
|---|---|---|---|
| Women - Men | BODY | 11 | Lose Belly Fat |
| | INTERACT | 11 | Act on a Date |
| | PRESENT | 13 | Dress Like a CEO |
| Kids - Teens | GROWN-UP | 3 | Look Older |
| | ADVICE | 4 | Balance School and Life |
| | ACTIVITY | 10 | Apply Makeup |

Table 4: Overview of the content-related categories with counts and example titles.
Therefore, we start our investigation by analyzing how often such cases occur in wikiHowAudiences, which topics they cover, and what differs between versions for specific target groups.
## 4.1 Guides Selection
First, we identify titles that occur more than once in wikiHowAudiences: 32 unique titles for W–M and 15 for K–T. Next, we group guides with the same title but different target audiences into pairs.
A complete list of article titles in this subset can be found in Appendix A.1.
## 4.2 Guides Analysis
To understand which goals require audiencespecific adaptations, we analyze the topics and articles of the filtered guides.
Topics. We start by manually investigating titles of the filtered pairs of guides. For this purpose, we assign each of them to one of three content-related categories. The categories were designed to cover all the titles while being as concrete as possible. An overview of all the categories and their examples is listed in Table 4.
We find that W–M instructions cover a relatively wide range of topics, from body-related activities (BODY), over interacting with other people (INTERACT), to self-presentation (PRESENT),
which is the most frequent category. In contrast, among titles in K–T, we notice one clear pattern:
all topics focus on issues that require different steps depending on the age of the target. Among them, we distinguish and report in ascending order of frequency articles about learning how to do activities for grown-ups or concerning the urge to grow old
(GROWN-UP), advice related to the life of young people (ADVICE), and activities about oneself or the relation of oneself to others (ACTIVITY).
Length. Next, we check whether there are significant differences in terms of how detailed the instructions are for different target groups. We quantify this by simply measuring the length per article in words and sentences. We notice a considerable difference between K and T: the median length of articles for K is only 30 sentences and 346 words, while articles for T contain 98 sentences and 1081 words. In the case of W and M, we do not find such large differences in terms of average word (785 vs. 856) and sentence counts (59 vs. 62).
Overall, the numbers reflect the patterns shown in Table 3 for the whole wikiHowAudiences data.
Content. Finally, we switch our attention to the actual content of the articles. As a simple measure of how similar two guides are, we consider their word overlap in both directions using BLEU score (Papineni et al., 2002).
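For illustration, this bidirectional overlap can be computed with NLTK's sentence-level BLEU; averaging the two directions and the smoothing choice are our assumptions:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def word_overlap(article_a: str, article_b: str) -> float:
    """Average BLEU computed in both directions between two article texts."""
    tok_a, tok_b = article_a.lower().split(), article_b.lower().split()
    smooth = SmoothingFunction().method1
    a_to_b = sentence_bleu([tok_a], tok_b, smoothing_function=smooth)
    b_to_a = sentence_bleu([tok_b], tok_a, smoothing_function=smooth)
    return (a_to_b + b_to_a) / 2
```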
Table 5 presents the articles with the lowest and highest word overlap in both analyzed groups. Interestingly in the case of W–M, both articles cover concepts related to BODY, namely clearing skin and recognizing an infection. Manual inspection of their content reveals that even in the case of the least overlapping articles, "Get Clear Skin", slight differences can be noticed: W article includes more specific information as well as different usage of punctuation. In the case of most overlapping articles, "Recognize Chlamydia Symptoms", the main difference comes from the vocabulary related to different body parts from body types. The high word overlap of these two versions is likely related to their introductions, which provide an interchangeable overview to the topic.
In the case of K–T, the least and most overlapping articles come from two different categories:
ACTIVITY and GROWN-UP. The least overlapping pair, "Flirt", is a case of two instructions that treat the same goal with different levels of complexity.
For example, the matter of eye contact is described with one step in K and more than ten in T. The most overlapping articles, "Make Money", can be an example of a content stalemate - for both target audiences, babysitting is the first suggested activity to achieve the profit goal. However, it is possible to notice differences in how this concept is contextualized for two groups: either in a list of activities or discussed with its implications and advantages.
W–M

| | Get Clear Skin (0.02 BLEU) | Recognize Chlamydia Symptoms (0.69 BLEU) |
|---|---|---|
| W | Gently pat your face dry with a clean towel. Don't rub your face! This can irritate your skin more. | Chlamydia is a dangerous yet common and curable sexually transmitted infection (...) |
| M | Dry your face - but not roughly. | Chlamydia, specifically chlamydia trachomatis, is a common and curable but dangerous sexually transmitted infection (...) |

K–T

| | Flirt (0.05 BLEU) | Make Money (0.59 BLEU) |
|---|---|---|
| K | Make eye contact. Both girls and boys love eye contact. | There are the traditional jobs like babysitting, shoveling snow, and doing chores around the house. |
| T | Make eye contact. Body language is a big part of flirting, and a big part of that is eye contact. Eye contact conveys intimacy (...) | Babysit for friends and family. One of the best ways for teenagers to make money and help out in the community is babysitting. |

Table 5: Excerpts from the article pairs with the lowest (left) and highest (right) word overlap.
## 4.3 Summary
We exemplified three characteristics that can distinguish guides written for different audiences. First, the instructions written for K–T significantly differed in *length*. Next, we saw pairs of guides that varied in *style* (such as punctuation) and *content*
(e.g., vocabulary in BODY articles). Some of the presented examples suggest that considering only simple content features could be enough to distinguish articles written for different audiences. However, such an approach could be insufficient in more complex cases, such as pairs of guides with high word overlap (see "Make Money"). We discuss these articles again in our generalization study (§6).
## 5 Case Study: "How To Be" Guides
In the previous section, we looked at how-to guides that occur in different versions for specific audiences. Such guides might concern particular goals that *require* being addressed in distinct ways. In this section, in contrast, we broaden the scope of analysis to explore other cases of differences in audience-specific instructions.
## 5.1 Guides Selection
The initial example from the introduction (see Table 1) explains how to perform like somebody the reader presumably is not. Inspired by this example, we investigate what other guides instruct their readers "how to be". Concretely, we filter titles starting with the word 'be', which gives us 118 guides for W, 20 for M, 32 for K, and 30 for T.
## 5.2 Guides Analysis

| | Completion(s) | Title |
|---|---|---|
| W | Popular | Be Popular and Athletic |
| | Cute | Be Cute at School |
| M | Cool | Be Cool in High School |
| | More | Be More Physically Attractive |
| K | Good | Be Good With Money |
| T | Good | Be a Good Friend |

Table 6: The most frequent target-specific completions of "how to be" guides and examples of respective titles.
To understand which topics the "how to be" guides cover, we group them according to the first word that occurs after 'be' (henceforth the *completion*).5 Table 6 shows the most frequent completions for each target group and respective example titles.
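A minimal sketch of this grouping step (assuming plain white-space tokenised titles):

```python
ARTICLES = {"a", "an", "the"}

def get_completion(title: str):
    """Return the first word after 'be', skipping the articles 'a', 'an' and 'the'."""
    words = title.lower().split()
    if not words or words[0] != "be":
        return None
    for word in words[1:]:
        if word not in ARTICLES:
            return word
    return None

print(get_completion("Be a Good Friend"))  # -> 'good'
```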
Regarding K–T guides, we notice no clear pattern that would distinguish instructions based only on their titles. There is roughly the same number of how-to articles for K and T (32 vs. 30). Moreover, among the most frequent completions we commonly find the word 'good', followed by words such as 'comfortable', 'less', or 'safe'.
In contrast, we find substantial differences for W–
M. Specifically, we note that "how to be" guides are more common for W (12% of all articles for this target group) and for both audiences we find differing frequencies of completions: While W articles focus on being 'cute' and 'popular' (9 guides), M articles put more emphasis on being 'cool' and 'more' (6 guides). Even though all the how-to guides refer to similar contexts (mostly related to school), we do not find mutual correspondence—there are no instructions for how to "be cool at school" for W and no guide for how to "be cute at school" for M.

5We ignore the articles 'a', 'an', and 'the'.
## 5.3 Summary
In this section, we looked at a particular subset of wikiHowAudiences, namely guides with titles starting with the word 'be'. We found that, in the case of W–M targets, the differences in instructions occur already at the level of goals that these guides describe. In other words, we saw examples of instructions where the information for which audience they were intended could be deduced strictly from their *titles*.
## 6 Generalization Study: Computational Approach
Our case studies show that, depending on the audience, there exist examples of articles that differ in terms of topic, length, style, and/or vocabulary. However, an open question is whether these are only individual cases or if such differences occur systematically. In this study, we investigate this question computationally and attempt to verify our observations on the basis of a larger dataset. For this purpose, we implement tentative characteristics in the form of features and models (§6.1), evaluate in a setting with our full sub-corpora (§6.2), discuss quantitative results (§6.3), and analyze qualitative findings (§6.4).
## 6.1 Models
Based on the findings from the two case studies, we define majority and length-based baselines and several simple logistic regression classifiers with different sets of features.
Baselines. We use a simple majority baseline that always assigns the most frequent class. We also implement two length-based baseline models that use the number of words in a title (or article) as the only feature for classification.
Content (title/article). The words and phrases used in a text can be potential indicators of its target group. Thus, we make use of the most common6 uni-grams and bi-grams, excluding stop words, as a feature representation for the content of a how-to guide. We evaluate two variants: features derived from the articles and from the titles.

6Note that we could have used all n-grams, but due to the small size of our data (see §6.2), we decided to limit the number of features via an additional hyperparameter.

| | W | M | K | T | Total |
|---|---|---|---|---|---|
| TRAIN | 805 | 172 | 416 | 337 | 1,730 |
| DEV | 94 | 23 | 45 | 37 | 199 |
| TEST | 94 | 14 | 38 | 37 | 183 |
| Total | 1,202 | | 910 | | 2,112 |

Table 7: Number of articles per class and data split (totals span W–M and K–T).
Style (article). We represent style using two sets of established features from authorship attribution (Sari et al., 2018), namely *lexical* style: average word length, number of short words, vocabulary richness in terms of hapax-legomena and dislegomena, % of digits, % of upper case letters; and syntactical style: occurrences of punctuation, frequencies of POS tags, and stop-word frequencies.
combined **(article).** Content and style can potentially provide complementary information. We test whether a model can leverage a combination of information from different sources. For this purpose, we simply concatenate the article-level features for content, style, and length.
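A sketch of such a combined classifier; the style features shown here are a small illustrative subset, not the full feature set of Sari et al. (2018):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def length_and_style_features(texts):
    """Toy length/style features: word count, average word length, share of digits."""
    feats = []
    for t in texts:
        words = t.split()
        feats.append([
            len(words),
            float(np.mean([len(w) for w in words])) if words else 0.0,
            sum(c.isdigit() for c in t) / max(len(t), 1),
        ])
    return np.array(feats)

ngrams = CountVectorizer(ngram_range=(1, 2), stop_words="english", max_features=200)
# X_train = np.hstack([ngrams.fit_transform(train_texts).toarray(),
#                      length_and_style_features(train_texts)])
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
```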
RoBERTa (article). As an alternative to manually selected features, we further test features derived from a large language model, RoBERTa (Liu et al., 2019). Specifically, we encode the article's text, truncated to the first 512 tokens, and extract the representation of the special classification token from the last hidden layer as a set of feature values.
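Such feature extraction can be sketched with the Hugging Face transformers library (batching and device handling omitted for brevity):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
encoder = AutoModel.from_pretrained("roberta-large")
encoder.eval()

def roberta_features(text: str) -> torch.Tensor:
    """Last-layer representation of the classification (<s>) token for a truncated article."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)  # shape: (hidden_size,)
```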
## 6.2 Experimental Setup
In order to find out whether and to what extent articles for different target groups can be distinguished computationally, we define two classification tasks in which specific articles, based on their characteristics, are to be assigned to one target group each. We distinguish between articles for women and men (W–M) and between articles for kids and teenagers (K–T). For all four classes, we use the full wikiHowAudiences, which we divide into TRAIN, DEV, and TEST sets following the article-level partition of the original wikiHowToImprove corpus (Anthonio et al., 2020). Statistics for each class and set are shown in Table 7.

| Model | W–M | K–T |
|--------------------|-------|-------|
| Baselines | | |
| Majority baseline | 0.47 | 0.34 |
| Length (title) | 0.47 | 0.44 |
| Length (article) | 0.47 | 0.61 |
| Content & Style | | |
| Content (title) | 0.57 | 0.57 |
| Content (article) | 0.59 | 0.78 |
| Style (article) | 0.58 | 0.67 |
| "Full" models | | |
| combined (article) | 0.71 | 0.78 |
| RoBERTa (article) | 0.68 | 0.74 |

Table 8: Macro F1-scores on the test sets.

For the style features, the texts are lemmatized with spaCy.7 We train each model on the TRAIN set and evaluate in terms of macro F1-score on the TEST set. We compute F1-score per class as the harmonic mean between precision (ratio of correct predictions) and recall (ratio of correctly classified instances). As our data is imbalanced, we use macro F1 instead of a weighted/micro score to treat each class (rather than each instance) as equally important.
A number of hyperparameters are optimized on the DEV set: We try different values for the logistic regression classifiers' L1 and C terms, sampled from 10 instances between 1e − 5 and 100. For the content features, we optimize the number of k most common n-grams (k = 200). We also made use of the DEV set to determine the best language model for our tasks, which we found to be roberta-large (results of other models are shown in Appendix A.2).8
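One way to set up this search; the grid search with cross-validation shown here is a simplification, as the paper selects hyperparameters on a fixed DEV split:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {
    "C": np.logspace(-5, 2, 10),   # 10 values between 1e-5 and 100
    "penalty": ["l1", "l2"],       # regularisation type (assumption)
}
search = GridSearchCV(
    LogisticRegression(solver="liblinear", max_iter=1000),
    param_grid, scoring="f1_macro", cv=3)
# search.fit(X_train, y_train); best = search.best_params_
```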
## 6.3 Results
The results are summarized in Table 8. As conjectured based on the K–T articles from the first case study, we find that the length-based baselines indeed outperform the majority baseline9in that setting. As the further results show, content and stylistic features can indeed be used to correctly assign a specified target group to many how-to guides. According to the evaluation scores, features calculated at the article level are particularly suitable for this purpose: The combined model, which uses content, style and length features on the article level, achieves the best result with macro-F1 scores of 0.71 and 0.78 for W–M and K–T, respectively. Features generated based on the roberta-large language model achieve competitive scores (0.68 and 0.74), but fall short of the combined model.
The large differences in result between the baselines and our models show that the target audience of many articles can be determined simply from the vocabulary and style of an article. Next, we take a closer look at model features and errors.
## 6.4 Analyses
For our analyses, we focus on the combined model because it achieves the best results and its features are easily interpretable.
Features. For each target group, we analyze what features are most important to the model. Since our model uses independent features in a binary classification task, we can simply check the highest positive and negative feature weights for this purpose. A selection from the ten most predictive features10 and example sentences are shown in Table 9. As the examples illustrate, some of the strongest features are, again, based on stereotypes
(e.g., 'cute', 'makeup' for W) or reflect heteronormative assumptions ('hers' for M). Interestingly, we also see characteristics of gender-inclusive language ('theirs' for M) and direct address of the reader in terms of their group membership ('kid' for K and 'teen' for T). We further find negations
(e.g., 'wasn't') as part of strong features for W,
which is particularly worrying in light of sociopsychological findings that have shown negations to serve a stereotype-maintaining function across languages (Beukeboom et al., 2010, 2020).
Same title articles. As examples of particularly hard cases, we return to the how-to guides from the first case study, which consisted of article versions for different audiences (§4). Following the data partition from previous work, we identify 16 such articles in the DEV and TEST splits. We find that the combined model classifies 12 of them correctly
(75%). In the remaining 4 cases, the prediction errors could have been caused by superficial features that are predictive for the opposite audience. We note for each of these 16 articles that the version for the opposite audience is part of the TRAIN split.

10Appendix A.2 lists all top-10 most predictive features.

| | Feature(s) | Example | Title |
|---|---|---|---|
| W | cute, makeup | Do cute makeup. | Look Cute |
| | wasn't | She most likely wasn't wearing the right colors for her skin tone. | Go from Ugly to Popular |
| M | hers | Slowly move your hand towards hers . . . | Know if Your Crush Likes You Back |
| | theirs | Being a good partner is all about . . . adjusting your style to suit theirs. | Grind |
| K | name | Think of your blog's name. | Write a Blog |
| | kid | . . . even if you're a kid, there are ways to bank a few extra bucks. | Make Money |
| T | dress | Dress up, make it look important. | Know What to Wear on Dates |
| | teen | When you're a teen with a busy schedule, it can be difficult to find time to be active. | Stay Active After School |

Table 9: A selection of the most predictive features per target group with example sentences and article titles.
Therefore, the topics of the guides are generally not specific to one audience, and a correct classification of the majority of cases demonstrates that the model indeed captures characteristics of content and style that seem specific to the audience itself.
## 7 Discussion And Conclusion
In this paper, we assessed differences across how-to guides written for specific audiences. In the construction of sub-corpora for four target groups, we already noticed inequalities on the level of who is being instructed in wikiHow: as a target audience, women are mentioned more than four times more frequently than men, and teens receive about 50% more instructions per article than kids. In two case studies, we investigated and provided examples of target-related differences on the levels of topic, style, and content.
The differences observed in our case studies inspired feature sets of shallow classifiers for predicting the target audience of a given guide. Using these classifiers, we showed that it is, in many cases, indeed possible to automatically predict for which audience an article was written. In an analysis of our results, we found that this success is not merely based on different topics covered for each target group but that the articles for each group systematically differ in terms of content and style.
Each of the aforementioned observations presents a tiny, seemingly insignificant piece of a puzzle. But taken together, these pieces reveal a surprisingly clear picture: there are noticeable differences in what topics are covered for each target group, how many articles and instructions are provided for each audience, and how these articles are written. Even though the audience-specific characteristics used in our studies are by no means exhaustive, our straightforward approach allowed us to identify, qualitatively and quantitatively, debatable differences in how wikiHow guides present particular topics to specific target groups. While there is an inevitable need for differences in vocabulary when speaking about physical features or body parts, it is at best unclear in which ways how-to guides about human interactions or selfpresentation should cast significant differences.
Some of the observed differences have already been critically discussed in the context of social science research. For example, it is well-known that labels such as 'cute' are used pejoratively as a form of social control (Talbot, 2019) and that prescriptive components of gender stereotypes in education contribute to discrimination (Kollmayer et al., 2018). However, exposing readers to cultural messages and beliefs about age, gender or other factors cannot be avoided entirely, especially on a collaboratively edited online platform. In fact, it seems to be a challenge for any pluralistic society to find a balance between communicating traditional values and empowering everyone. It is therefore all the more important for a comprehensive understanding to determine when and in what form social norms are conveyed. As such, we view the contributions of this paper, namely our data set of audience-specific guides, wikiHowAudiences, and our mixed-methods approach for identifying and verifying differences, as a valuable connecting point to raise awareness of potential issues and to foster interdisciplinary dialogue for future research.
## Limitations
Our studies focus on the differences in how-to guides written for specific audiences only in one language, namely English. A major limitation is therefore that we do not consider other languages.
The perspectives provided by the data source we rely on, wikiHow, allow us to identify specific phenomena and peculiarities. Yet, contemplating only one data source lets us generalize only to a limited extent. For example, the audiences considered in this work depended on the target groups portrayed in the data. They are neither exhaustive nor representative of the diversity of humankind, especially of marginalized social groups. Therefore, a wider variety of data sources will be needed to test generalizations.
Finally, a further limitation of our studies concerns intersectionality. While it seems possible that guides can be tuned by contemplating one specific attribute of the audience at a time, this does not hold with regard to the actual attributes of the readers. Such attributes are per se coexistent, and consequently, they are not separable.
## Ethics Statement
We acknowledge that the content that emerged from the data is narrow in terms of cultural perspectives, mainly addressing western cultures. Moreover, the analysis of the audiences is not exhaustive of the diversity of humankind, especially not exhaustively accounting for queer identities in particular trans and non-binary identities. With the present research, we do not intend to reinforce representational biases, rather to highlight them.
## References
Sweta Agrawal and Marine Carpuat. 2019. Controlling text complexity in neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1549–
1564, Hong Kong, China. Association for Computational Linguistics.
Talita Anthonio, Irshad Bhat, and Michael Roth. 2020.
wikiHowToImprove: A resource and analyses on
edits in instructional texts. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5721–5729, Marseille, France. European Language Resources Association.
Talita Anthonio and Michael Roth. 2020. What can we learn from noun substitutions in revision histories? In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1359–
1370, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Talita Anthonio and Michael Roth. 2021. Resolving implicit references in instructional texts. In *Proceedings* of the 2nd Workshop on Computational Approaches to Discourse, pages 58–71, Punta Cana, Dominican Republic and Online. Association for Computational Linguistics.
Talita Anthonio, Anna Sauer, and Michael Roth. 2022.
Clarifying implicit and underspecified phrases in instructional text. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 3319–3330, Marseille, France. European Language Resources Association.
Camiel J. Beukeboom, Christian Burgers, Zsolt P. Szabó, Slavica Cvejic, Jan-Erik M. Lönnqvist, and Kasper Welbers. 2020. The negation bias in stereotype maintenance: A replication in five languages. *Journal of Language and Social Psychology*,
39(2):219–236.
Camiel J. Beukeboom, Catrin Finkenauer, and Daniël H. J. Wigboldus. 2010. The negation bias: When negations signal stereotypic expectancies. *Journal of* Personality and Social Psychology, 99(6):978–992.
Irshad Bhat, Talita Anthonio, and Michael Roth. 2020.
Towards modeling revision requirements in wikiHow instructions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8407–8414, Online. Association for Computational Linguistics.
S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In *Proceedings of* the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP,
pages 82–90, Suntec, Singapore. Association for Computational Linguistics.
Pengshan Cai, Mo Yu, Fei Liu, and Hong Yu. 2022.
Generating coherent narratives with subtopic planning to answer how-to questions. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 26–42, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Ewa S. Callahan and Susan C. Herring. 2011. Cultural Bias in Wikipedia Content on Famous Persons. Journal of the American society for information science and technology, 62(10):1899–1915.
Carmen Cervone, Martha Augoustinos, and Anne Maass. 2021. The language of derogation and hate:
Functions, consequences, and reappropriation. *Journal of language and social psychology*, 40(1):80–
101.
Youngjoo Cha and Kim A Weeden. 2014. Overwork and the slow convergence in the gender gap in wages.
American Sociological Review, 79(3):457–484.
Cuong Xuan Chu, Niket Tandon, and Gerhard Weikum.
2017. Distilling task knowledge from how-to communities. In *Proceedings of the 26th International* Conference on World Wide Web, pages 805–814.
Lilia M Cortina, Dana Kabat-Farr, Emily A Leskinen, Marisela Huerta, and Vicki J Magley. 2013. Selective incivility as modern discrimination in organizations: Evidence and impact. *Journal of management*,
39(6):1579–1605.
Alok Debnath and Michael Roth. 2021. A computational analysis of vagueness in revisions of instructional texts. In *Proceedings of the 16th Conference* of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 30–35, Online. Association for Computational Linguistics.
Elizabeth N. Dougherty, Andrea B. Goldschmidt, Nicole K. Johnson, Krystal Badillo, Scott G. Engel, and Alissa A. Haedt-Matt. 2022. Gender differences in the relation between interpersonal stress and momentary shape and weight concerns in youth with overweight/obesity. *Body Image*, 40:249–255.
Noemie Elhadad and Komal Sutaria. 2007. Mining a lexicon of technical terms and lay equivalents. In Biological, translational, and clinical language processing, pages 49–56, Prague, Czech Republic. Association for Computational Linguistics.
Agnieszka Falenska and Özlem Çetinoğlu. 2021. Assessing gender bias in Wikipedia: Inequalities in article titles. In *Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing*, pages 75–85, Online. Association for Computational Linguistics.
Danilo Fum, Giovanni Guida, and Carlo Tasso. 1986.
Tailoring importance evaluation to reader's goals: A
contribution to descriptive text summarization. In Coling 1986 Volume 1: The 11th International Conference on Computational Linguistics.
Xingwei He and Siu Ming Yiu. 2022. Controllable dictionary example generation: Generating example sentences for specific targeted audiences. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 610–627, Dublin, Ireland. Association for Computational Linguistics.
Tomoyuki Kajiwara, Hiroshi Matsumoto, and Kazuhide Yamamoto. 2013. Selecting proper lexical paraphrase for children. In Proceedings of the 25th Conference on Computational Linguistics and Speech Processing (ROCLING 2013), pages 59–73, Kaohsiung, Taiwan. The Association for Computational Linguistics and Chinese Language Processing
(ACLCLP).
Marlene Kollmayer, Barbara Schober, and Christiane Spiel. 2018. Gender stereotypes in education: Development, consequences, and interventions. *European* Journal of Developmental Psychology, 15(4):361–
377.
Piotr Konieczny and Maximilian Klein. 2018. Gender gap through time and space: A journey through Wikipedia biographies via the Wikidata Human Gender Indicator. *New Media Soc.*, 20(12).
Mahnaz Koupaee and William Yang Wang. 2018.
Wikihow: A large scale text summarization dataset.
CoRR, abs/1810.09305.
Shyong (Tony) K. Lam, Anuradha Uduwage, Zhenhua Dong, Shilad Sen, David R. Musicant, Loren Terveen, and John Riedl. 2011. WP:Clubhouse? an exploration of Wikipedia's gender imbalance. In *Proceedings of the 7th International Symposium on Wikis* and Open Collaboration, WikiSym '11, page 1–10, New York, NY, USA. Association for Computing Machinery.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.
Laia Palència, Davide Malmusi, Deborah De Moortel, Lucía Artazcoz, Mona Backhans, Christophe Vanroelen, and Carme Borrell. 2014. The influence of gender equality policies on gender inequalities in health in europe. *Social science & medicine*, 117:25–33.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Deborah A Prentice and Erica Carranza. 2002. What women and men should be, shouldn't be, are allowed to be, and don't have to be: The contents of prescriptive gender stereotypes. *Psychology of women* quarterly, 26(4):269–281.
Joseph Reagle and Lauren Rhue. 2011. Gender bias in Wikipedia and Britannica. *International Journal of* Communication, 5:21.
Michael Roth and Talita Anthonio. 2021. UnImplicit shared task report: Detecting clarification requirements in instructional text. In Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language, pages 28–32, Online. Association for Computational Linguistics.
Yunita Sari, Mark Stevenson, and Andreas Vlachos.
2018. Topic or style? exploring the most useful features for authorship attribution. In Proceedings of the 27th International Conference on Computational Linguistics, pages 343–353, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Carolina Scarton and Lucia Specia. 2018. Learning simplifications for specific target audiences. In *Proceedings of the 56th Annual Meeting of the Association for* Computational Linguistics (Volume 2: Short Papers),
pages 712–718, Melbourne, Australia. Association for Computational Linguistics.
Katja Geertruida Schmahl, Tom Julian Viering, Stavros Makrodimitris, Arman Naseri Jahfari, David Tax, and Marco Loog. 2020. Is Wikipedia succeeding in reducing gender bias? assessing changes in gender bias in Wikipedia using word embeddings. In *Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science*, pages 94–103, Online. Association for Computational Linguistics.
Yasuko Senda and Yaushi Shinohara. 2002. Analysis of titles and readers for title generation centered on the readers. In COLING 2002: The 19th International Conference on Computational Linguistics.
Yasuko Senda, Yasusi Sinohara, and Manabu Okumura.
2004. A support system for revising titles to stimulate the lay reader's interest in technical achievements. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 155–
161, Geneva, Switzerland. COLING.
Yasuko Senda, Yasusi Sinohara, and Manabu Okumura.
2006. Automatic terminology intelligibility estimation for readership-oriented technical writing. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06),
Genoa, Italy. European Language Resources Association (ELRA).
Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified textto-text transfer transformer. In *Proceedings of the* 14th International Conference on Natural Language Generation, pages 341–352, Aberdeen, Scotland, UK.
Association for Computational Linguistics.
Advaith Siddharthan and Napoleon Katsos. 2010. Reformulating discourse connectives for non-expert readers. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1002–1010, Los Angeles, California. Association for Computational Linguistics.
Janet K Swim, Robyn Mallett, and Charles Stangor.
2004. Understanding subtle sexism: Detection and use of sexist language. *Sex roles*, 51(3):117–128.
Mary Talbot. 2019. *Language and gender*. John Wiley
& Sons.
Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. It's a man's Wikipedia?
assessing gender inequality in an online encyclopedia. In *Proceedings of the international AAAI conference* on web and social media, volume 9, pages 454–463.
Claudia Wagner, Eduardo Graells-Garrido, and David García. 2016. Women through the glass ceiling: gender asymmetries in Wikipedia. *EPJ Data Science*,
5:1–24.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020.
Reasoning about goals, steps, and temporal ordering with WikiHow. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 4630–4639, Online. Association for Computational Linguistics.
## A Appendix

## A.1 Case Study
| Category | Word Overlap | Same Title | Indicator for W | Indicator for M |
|----------|--------------|------------|-----------------|-----------------|
| BODY | 0.02 | Get Clear Skin | for Middle School Girls | Guys |
| BODY | 0.05 | Burn Fat | for Girls | for Men |
| PRESENT | 0.07 | Get Ready for School | for Girls | Guys |
| PRESENT | 0.09 | Look Rich Without Being Rich | Teen Girls | for Guys |
| PRESENT | 0.16 | Get Ready for School | Teen Girls | Guys |
| INTERACT | 0.16 | Catch Your Crush's Eye | for Girls Only | Boys |
| INTERACT | 0.16 | Dance at a School Dance | for Girls | for Guys |
| PRESENT | 0.21 | Act Like a Kid Again | Girls | Boys |
| PRESENT | 0.21 | Look Like an Abercrombie Model | for Girls | Boys |
| PRESENT | 0.21 | Dress Emo | for Girls | Guys |
| BODY | 0.24 | Have Good Hygiene | Girls | Boys |
| PRESENT | 0.25 | Prepare for a School Dance | for Girls | for Guys |
| PRESENT | 0.27 | Pack for Soccer Practice | Girls | Boys |
| INTERACT | 0.28 | Be in a Female Led Relationship | Women | Men |
| INTERACT | 0.30 | Act on a Date | for Girls | for Boys |
| PRESENT | 0.31 | Dress Cool | for Girls | Guys |
| BODY | 0.31 | Lose Belly Fat | Teen Girls | for Men |
| PRESENT | 0.33 | Look Hot on Club Penguin | Girls | Guys |
| PRESENT | 0.35 | Be Awesome | for Girls | for Boys |
| INTERACT | 0.36 | Have Fun with Your Friends | Teen Girls | Guys |
| PRESENT | 0.36 | Dress Like a CEO | Women | Men |
| INTERACT | 0.37 | Cradle a Lacrosse Stick | Girls | Men |
| INTERACT | 0.41 | Get Your Crush to Like You | Girls | Guys |
| INTERACT | 0.43 | Practice Changing Room Etiquette | Girls | Men |
| INTERACT | 0.43 | Practice Changing Room Etiquette | Women | Men |
| BODY | 0.44 | Recognize Trichomoniasis Symptoms | Women | Men |
| PRESENT | 0.45 | Be Popular in Middle School | for Girls | for Boys |
| BODY | 0.47 | Lose Belly Fat | for Women | for Men |
| BODY | 0.49 | Gain Weight Fast | for Women | for Men |
| BODY | 0.52 | Be Indie | for Girls | for Guys |
| INTERACT | 0.54 | Grind | for Girls | for Guys |
| INTERACT | 0.55 | Host a Sleepover | Teen Girls | for Boys |
| BODY | 0.57 | Treat Acne | Teenage Girls | Teen Boys |
| BODY | 0.60 | Prevent HIV Infection | Women | Men |
| BODY | 0.69 | Recognize Chlamydia Symptoms | for Women | for Men |
| Category | Word Overlap | Same Title | Indicator for K | Indicator for T |
| ACTIVITY | 0.05 | Flirt | Middle School | for Teens |
| ACTIVITY | 0.10 | Redo Your Bedroom | Preteen Girls | Teen Girls |
| GROWN-UP | 0.11 | Look Older | Preteen Girls | Teenage Girls |
| ACTIVITY | 0.14 | Enjoy Summer Vacation | for Kids | for Teens |
| ACTIVITY | 0.18 | Clean Your Room | Kids | Teens |
| ADVICE | 0.19 | Enjoy a Plane Ride | for Grade School Kids | Teen Girls |
| ADVICE | 0.20 | Be Less Insecure | Preteens | for Teen Girls |
| ACTIVITY | 0.28 | Clean Your Room | Tween Girls | Teens |
| ACTIVITY | 0.29 | Pack for a Vacation | Preteen Girls | Teen Girls |
| ADVICE | 0.30 | Get a Boy to Like You | Pre Teens | Teens |
| ACTIVITY | 0.31 | Apply Makeup | Preteens | for Teen Girls |
| ADVICE | 0.33 | Balance School and Life | Middle School | Teens |
| ACTIVITY | 0.36 | Host a Girls Only Sleepover | for Preteens | Teens |
| ACTIVITY | 0.39 | Get Ready for Bed | Tween Girls | for Teenage Girls |
| GROWN-UP | 0.46 | Get Fit | for Kids | Teenage Girls |
| ACTIVITY | 0.53 | Apply Makeup | Preteens | for Teens |
| GROWN-UP | 0.59 | Make Money | for Kids | for Teenagers |
Table 10: All "Same Title, Different Audience" guides.
| Be (...) | X | (...) | indicator |
|----------|---|-------|-----------|
| Be | Popular | and Athletic | (for Girls) |
| Be | Popular | in Grade 6. | (for Girls.) |
| Be | Popular | in Middle School | (for Girls) |
| Be | Popular | in a School Uniform | (Girls) |
| Be | Popular | in Secondary School | (for Girls) |
| Be a | Cute | Teen | (Girl) |
| Be | Cute | | (Tween Girls) |
| Be the | Cute | and Hot Teen | (Girls) |
| Be | Cute | at School | (Girls) |
| Be | Cool | Around Your Crush | (for Boys) |
| Be | Cool | in High School | (Boys) |
| Be a | Cool | Christian | (Teen Guys) |
| Be | More | Attractive to Girls | (for Boys) |
| Be | More | Physically Attractive | (Men) |
| Be | More | Socially Open | (Men) |
| Be a | Good | Hamster Owner | (for Kids) |
| Be a | Good | Stuffed Animal Mom | (for Kids) |
| Be | Good | With Money | (for Kids) |
| Be a | Good | Friend | (Teens) |
| Be a | Good | Writer | (Teens) |
Table 11: The most common completions in the titles for "how to be".
## A.2 Classification Tasks
| model-name | W–M | K–T |
|--------------------|-------|-------|
| bert-base-uncased | 0.57 | 0.64 |
| roberta-base | 0.81 | 0.73 |
| bert-large-uncased | 0.73 | 0.74 |
| roberta-large | 0.82 | 0.75 |
Table 12: The performance on the DEV set of the classification tasks with optimized LR using the [CLS] token representations from the different LMs.
Most predictive features of the combined model:
W: hadn't - wasn't - cute - makeup - ourselves - bag - skirt - outfit - move - sleep
M: man - product - boy - yourselves - o - dance - theirs - shoe - hers - person
K: kid - the - adult - name - are - step - were - else - probably - mean
T: teen - without - than - dress - next - her - want - buy - everyone - ADJ
## A.3 Confusion Matrices
|   | DEV: W | DEV: M | TEST: W | TEST: M |
|---|--------|--------|---------|---------|
| W | 0.83   | 0.17   | 0.87    | 0.13    |
| M | 0.48   | 0.52   | 0.36    | 0.64    |
| W | 78     | 16     | 82      | 12      |
| M | 11     | 12     | 5       | 9       |

Table 13: The confusion matrix for the W–M task on the dev set (left) and the test set (right); the upper rows give proportions, the lower rows raw counts.

|   | DEV: K | DEV: T | TEST: K | TEST: T |
|---|--------|--------|---------|---------|
| K | 0.78   | 0.22   | 0.87    | 0.13    |
| T | 0.35   | 0.65   | 0.30    | 0.70    |
| K | 35     | 10     | 33      | 5       |
| T | 13     | 24     | 11      | 26      |

Table 14: The confusion matrix for the K–T task on the dev set (left) and the test set (right); the upper rows give proportions, the lower rows raw counts.
kader-etal-2023-words | {``}When Words Fail, Emojis Prevail{''}: A Novel Architecture for Generating Sarcastic Sentences With Emoji Using Valence Reversal and Semantic Incongruity | https://aclanthology.org/2023.acl-srw.47 | Sarcasm is a form of figurative language that serves as a humorous tool for mockery and ridicule. We present a novel architecture for sarcasm generation with emoji from a non-sarcastic input sentence in English. We divide the generation task into two sub tasks: one for generating textual sarcasm and another for collecting emojis associated with those sarcastic sentences. Two key elements of sarcasm are incorporated into the textual sarcasm generation task: valence reversal and semantic incongruity with context, where the context may involve shared commonsense or general knowledge between the speaker and their audience. The majority of existing sarcasm generation works have focused on this textual form. However, in the real world, when written texts fall short of effectively capturing the emotional cues of spoken and face-to-face communication, people often opt for emojis to accurately express their emotions. Due to the wide range of applications of emojis, incorporating appropriate emojis to generate textual sarcastic sentences helps advance sarcasm generation. We conclude our study by evaluating the generated sarcastic sentences using human judgement. All the codes and data used in this study has been made publicly available. |
## "When Words Fail, Emojis Prevail": Generating Sarcastic Utterances With Emoji Using Valence Reversal And Semantic Incongruity
Faria Binte Kader∗, Nafisa Hossain Nujat∗, Tasmia Binte Sogir∗,
Mohsinul Kabir, Hasan Mahmud, Kamrul Hasan
Department of Computer Science and Engineering, Islamic University of Technology, Dhaka, Bangladesh
{faria, nafisa13, tasmia, mohsinulkabir, hasan, hasank}@iut-dhaka.edu
## Abstract
Sarcasm is a form of figurative language that serves as a humorous tool for mockery and ridicule. We present a novel architecture for sarcasm generation with emoji from a non-sarcastic input sentence in English. We divide the generation task into two sub tasks: one for generating textual sarcasm and another for collecting emojis associated with those sarcastic sentences. Two key elements of sarcasm are incorporated into the textual sarcasm generation task: valence reversal and semantic incongruity with context, where the context may involve shared commonsense or general knowledge between the speaker and their audience. The majority of existing sarcasm generation works have focused on this textual form. However, in the real world, when written texts fall short of effectively capturing the emotional cues of spoken and face-to-face communication, people often opt for emojis to accurately express their emotions. Due to the wide range of applications of emojis, incorporating appropriate emojis to generate textual sarcastic sentences helps advance sarcasm generation. We conclude our study by evaluating the generated sarcastic sentences using human judgement. All the code and data used in this study have been made publicly available1.
## 1 Introduction
Sarcasm is defined as the use of remarks that often mean the opposite of what is said in order to hurt someone's feelings or to criticize something in a humorous way2. Sarcastic remarks are often challenging to interpret considering their literal meaning differs greatly from the speaker's actual intent.
Compared to verbal or in-person conversations, textual sarcasm presents additional challenges due to the absence of visual cues, vocal tone etc.
| Non-Sarcastic Input | Sarcastic Output with Emoji |
|---------------------|------------------------------|
| I really hate walking in the rain. | I really love the outdoors walking in the rain. I sat feeling thoroughly miserable. |
| Mom is in a bad mood today. | Happy mothers day mom is in a well mood today. She sounded tense and angry. |
| That movie was bad. | That movie was awesome. Bad intelligence and political incompetence. |

Table 1: Sample sarcastic outputs with emoji generated from non-sarcastic inputs

The presence of sarcasm makes it significantly harder for machines to understand the actual meaning of the textual data. This has motivated research in detecting sarcasm in textual data. In order to train machines to detect sarcasm, we need quality datasets that represent different aspects of sarcasm in text. Even though we have an abundance of social media data and resources, it can be difficult to collect correctly labeled sarcastic texts. Instead, many studies have tried to generate texts that can accurately express sarcastic notions (Joshi et al.,
2015; Mishra et al., 2019; Chakrabarty et al., 2020).
Many studies have also investigated strategies in incorporating sarcasm generation into chatbots (Joshi et al., 2015, 2017).
Emojis, small ideograms that represent objects, people, and scenes (Cappallo et al., 2015), are one of the key elements of a novel form of communication due to the advent of social media. Using emojis within texts can give us additional cues on sarcasm, replicating facial expressions and body language, etc. Incorporating emojis with texts for training will let the machines catch these cues easily (Bharti et al., 2016). Subramanian et al. (2019)
observed that when emojis were included in the sentence, their emoji-based sarcasm detection model performed noticeably better.
In this study, we propose a new framework in which when given a non-sarcastic text as input, the text is converted into a sarcastic one with emoji where the emoji will specifically help to identify the sarcastic intent of the text. Table 1 shows a few sample non-sarcastic input and sarcastic output pairs with emoji. In order to implement the architecture, we have focused on two major components: Sarcastic text generation and Emoji prediction for the text.
For textual sarcasm generation, we incorporate the works of Chakrabarty et al. (2020) and Mishra et al. (2019), and for emoji prediction, a deep learning model fine-tuned on OpenAI's CLIP (Contrastive Language-Image Pre-training; https://openai.com/research/clip) (Radford et al., 2021) is used. The emoji prediction module along with the sarcasm generation module generates the final sarcastic text including emoji.
This work provides two major contributions:
1. Propose a novel multi-modular framework for sarcasm generation incorporating the reversal of valence and semantic incongruity characteristics of sarcasm while also including appropriate emojis.
2. Create and publish a sarcastic corpora which can serve as valuable training data for sarcasm detection models.
As far as our understanding goes, there has been no previous framework proposed on textual sarcasm generation that also incorporates emojis. This framework can aid downstream tasks by allowing a deeper understanding of sarcasm to produce more contextually relevant responses.
## 2 Related Work
Research on sarcasm has been a subject of interest for several decades. The following subsections provide a brief overview of past work on different aspects of sarcasm.
## 2.1 Studies On Sarcasm Detection
Sarcasm detection is a classification task in its most typical form. From a given text, the task includes classifying the text as sarcastic or non-sarcastic.
Sarcasm detection is a fairly recent but promising research field in the domain of Natural Language Processing. Nonetheless, it serves as a crucial part of sentiment analysis (Maynard and Greenwood, 2014).
Most of these studies on sarcasm detection train and test on already available popular datasets such as the datasets used by Riloff et al. (2013), Khodak et al. (2017) and Cai et al. (2019). We observed that Twitter is predominantly the most popular social media platform used for sarcasm detection datasets although Reddit, Amazon and a few discussion forums were also seen being used.
We also saw a shift in sarcasm detection methodologies from rule-based approaches (Riloff et al., 2013; Bharti et al., 2015), through machine learning and deep learning approaches (Bharti et al., 2017; Poria et al., 2016; Ghosh and Veale, 2016), to transformer-based approaches (Dadu and Pant, 2020; Kumar et al., 2021). We include two tables (Table 9 and Table 10) summarizing the datasets and methodologies used in sarcasm detection in the appendix (Section A).
Recent works on sarcasm detection include frequent use of BERT (Savini and Caragea, 2022; Zhang et al., 2023; Pandey and Singh, 2023), multimodal and cross-modal detection tasks (Liang et al., 2022; Chauhan et al., 2022; Ding et al.,
2022), enhancement of sarcasm detection in complex expressions with sememe knowledge (Wen et al., 2022), study on the effect of foreign accent
(Puhacheuskaya and Järvikivi, 2022), use of vocal and facial cues (Aguert, 2022) etc. Sarcasm and irony detection from languages other than English i.e. Chinese, Dutch, Spanish, Arabic, Romanian etc. have also been studied in recent works (Farha and Magdy, 2020; Muaad et al., 2022; Maladry et al., 2022; Wen et al., 2022; Ortega-Bueno et al.,
2022; Buzea et al., 2022).
## 2.2 Characteristics Of Sarcasm
Studies have identified a variety of potential sources for sarcasm. According to Gerrig and Goldvarg (2000), sarcasm stems from a situational disparity between what the speaker desires, believes, or expects and what actually happens. Incongruity between text and contextual information is mentioned as a factor by Wilson (2006). Context incongruity (Campbell and Katz, 2012) is addressed in the work of Riloff et al. (2013), who suggest that sarcasm arises from a contrast between positive verbs and negative situation phrases. Burgers et al. (2012) formulates that for an utterance to be sarcastic:

1. the sentence has to be evaluative,
2. it should be based on the reversal of valence of the literal and intended meanings,
3. it should have a semantic incongruity with the context, which may consist of common sense or general information that the speaker and the addressee share,
4. it should be aimed at some target,
5. it should be in some manner relevant to the communication scenario.

Many studies focused on one or more of these characteristics.

![2_image_0.png](2_image_0.png)
## 2.3 Sarcasm Generation
Compared to sarcasm detection, research on sarcasm generation is still in its early stages. Joshi et al. (2015) introduced SarcasmBot 4 , a chatbot that caters to user input with sarcastic responses. SarcasmBot is a sarcasm generation module with eight rule-based sarcasm generators where each of the generators produces a different type of sarcastic expression. During the execution phase, one of these generators is selected based on user input properties. Essentially, it yields sarcastic responses rather than converting a literal input text into a sarcastic one, the latter one being a common practice in future research. This method was later utilized in the author's subsequent work (Joshi et al., 2017)
where they built SarcasmSuite, a web-based interface for sarcasm detection and generation.
4 https://github.com/adityajo/sarcasmbot/

The first work on automatic sarcasm generation conditioned on a literal input was performed by Mishra et al. (2019). The authors relied on the Context Incongruity characteristic of sarcasm mentioned by Riloff et al. (2013) and employed information retrieval-based techniques and reinforced neural seq2seq learning to generate sarcasm. They used unlabeled non-sarcastic and sarcastic opinions to train their models, where sarcasm was formed as a result of a disparity between a situation's positive sentiment context and negative situational context. A thorough evaluation of the proposed system's performance against popular unsupervised statistical, neural, and style transfer techniques showed that it significantly outperformed the baselines taken into account.
Chakrabarty et al. (2020) introduced a new framework by incorporating context in the forms of shared commonsense or world knowledge to model semantic incongruity. They based their research on the factors addressed by Burgers et al. (2012).
Their architecture is structured into three modules:
Reversal of Valence, Retrieval of Commonsense Context, and Ranking of Semantic Incongruity.
With this framework they were able to simulate two fundamental features of sarcasm: reversal of valence and semantic incongruity with the context.
However, they opted for a rule-based system to reverse the sentiments. The authors also noticed that in a few cases, the simple reversal of valence strategy was enough to generate sarcasm which meant the addition of context was redundant.
Recent similar works in the field include that of Oprea et al. (2021) where they developed a sarcastic response generator, Chandler, that also provides explanations as to why they are sarcastic. Das et al. (2022) manually extracted the features of a benchmark pop culture sarcasm corpus and built padding sequences from the vector representations' matrices. They proposed a hybrid of four Parallel LSTM Networks, each with its own activation classifier which achieved 98.31% accuracy among the test cases on open-source English literature. A
new problem of cross-modal sarcasm generation
(CMSG) that creates sarcastic descriptions of a given image was introduced by Ruan et al. (2022).
However, these studies have focused only on generating textual sarcastic sentences; as described by Subramanian et al. (2019), incorporating emojis improved the overall performance of sarcasm detection and thus represents a potential research direction.
## 3 Methodology
Our model architecture consists of 3 modules which are as follows: Reversal of Valence, Retrieval of Commonsense and Emoji Prediction. The Reversal of Valence module takes in a negative utterance and generates an utterance with positive sentiment. The Retrieval of Commonsense module outputs relevant commonsense context sentence which helps in creating a sarcastic situation. Lastly, the Emoji Prediction module generates an emoji which makes the overall output more sarcastic.
With these three modules, we have incorporated two of the fundamental features of sarcasm: reversal of valence and semantic incongruity with the context. A diagram of the overall pipeline is demonstrated in Figure 1. We describe the modules in detail in the next few subsections.
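To make the flow of Figure 1 concrete, the following is a minimal sketch of how the three modules could be chained. The callables (`neutralize`, `induce_positive`, `retrieve_commonsense`, `rank_by_incongruity`, `predict_emoji`) are illustrative placeholders for the components described in the subsections below, not the released implementation.

```python
from typing import Callable, Iterable

def generate_sarcasm(
    non_sarcastic: str,
    neutralize: Callable[[str], str],
    induce_positive: Callable[[str], str],
    retrieve_commonsense: Callable[[str], Iterable[str]],
    rank_by_incongruity: Callable[[str, Iterable[str]], str],
    predict_emoji: Callable[[str], str],
) -> str:
    """Chains the three modules of Figure 1; the callables stand in for Sections 3.1-3.3."""
    # 3.1 Reversal of Valence
    neutral = neutralize(non_sarcastic)            # 3.1.1 Sentiment Neutralization
    reversed_utterance = induce_positive(neutral)  # 3.1.2 Positive Sentiment Induction
    # 3.2 Retrieval of Commonsense, ranked by incongruity with the reversed utterance
    candidates = retrieve_commonsense(non_sarcastic)
    context = rank_by_incongruity(reversed_utterance, candidates)
    # 3.3 Emoji Prediction on the non-sarcastic input concatenated with the context
    emoji = predict_emoji(f"{non_sarcastic} {context}")
    return f"{reversed_utterance} {context} {emoji}"
```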
## 3.1 Reversal Of Valence
In the work of Chakrabarty et al. (2020), for the reversal of valence module, they have used a rule-based approach to manually reverse the sentiment of the negative sentence. But a rule-based model cannot reverse sentences that do not follow the traditional structure of sentences, such as those used in social media. We address this limitation of the current state-of-the-art sarcasm generation model by replacing their rule-based reversal module with a deep-learning reversal module inspired by the work of Mishra et al. (2019). This module is divided into two parts: Sentiment Neutralization and Positive Sentiment Induction.
## 3.1.1 Sentiment Neutralization
We implement the Sentiment Neutralization module to filter out the sentiment words from the input utterance, which turns a negative sentence into a neutral one. An example is shown in Table 2.
| Negative Input | Neutral Output |
|----------------|----------------|
| Is feeling absolutely bloated and fat from lack of a proper workout | Is feeling absolutely and from a proper workout |

Table 2: Example of sentiment neutralization from input sentence

The neutralization model is essentially a sentiment classification model which first detects the sentiment of the given utterance (positive/negative).
This model consists of several LSTM layers and a self-attention layer. During testing, the self-attention vector is extracted as done by Xu et al. (2018), which is then inverted and discretized as follows:
$$\hat{a}_{i}=\begin{cases}0, & \text{if } a_{i} > 0.95 \cdot \max(a)\\ 1, & \text{otherwise}\end{cases}\qquad(1)$$
where $a_i$ is the attention weight for the i-th word, and $\max(a)$ gives the highest attention value from the current utterance. A word is filtered out if the discretized attention weight for that word is 0. The sentiment detection model architecture is shown in Figure 2.

![3_image_0.png](3_image_0.png)
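As an illustration of Equation 1, the sketch below filters an utterance given per-token attention weights. The attention values in the example are made up; in the actual module they come from the trained classifier's self-attention layer.

```python
def neutralize(tokens: list, attention: list, threshold: float = 0.95) -> str:
    """Drop tokens whose attention weight exceeds threshold * max(a), i.e. a_hat_i == 0 in Eq. 1."""
    cutoff = threshold * max(attention)
    kept = [tok for tok, a in zip(tokens, attention) if a <= cutoff]  # keep words with a_hat_i == 1
    return " ".join(kept)

# Example with made-up attention weights: sentiment-bearing words receive high attention
tokens = "is feeling absolutely bloated and fat from lack of a proper workout".split()
attention = [0.02, 0.03, 0.02, 0.98, 0.02, 0.97, 0.03, 0.96, 0.96, 0.02, 0.03, 0.02]
print(neutralize(tokens, attention))
# -> "is feeling absolutely and from a proper workout" (cf. Table 2)
```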
## 3.1.2 Positive Sentiment Induction
The output from the Sentiment Neutralization module is fed to the Positive Induction module as input.
The module takes in a neutral utterance, incorporates positive sentiment into it, and returns a sentence with positive sentiment. An example is shown in Table 3. For this, we use a Neural Machine Translation method built on the OpenNMT framework (Klein et al., 2017), where we first train our model with a set of <source, target> pairs in which the source is a neutral sentence and the target is its positive counterpart. We use the Positive dataset provided by Mishra et al. (2019), which includes a set of positive sentences. We pass this dataset through the sentiment neutralization module to obtain the neutral source sentence for each positive target sentence and use these <source, target> pairs to train the positive induction module. The input sentences are transformed into embeddings that go through the translation encoders and decoders. The encoders and decoders are both built with LSTM layers.
| Neutral Input | Positive Output |
|---------------|-----------------|
| Is feeling absolutely and from a proper workout | Is feeling absolutely amazing and high got away from a proper workout |
Table 3: Example of positive sentiment induction from neutralized sentence
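A minimal sketch of how the <source, target> training pairs for this module could be assembled from the positive corpus, assuming a neutralization function like the one sketched in Section 3.1.1; the file names are illustrative, not those of the released code.

```python
def build_translation_pairs(positive_sentences, neutralize,
                            src_path="src-train.txt", tgt_path="tgt-train.txt"):
    """Write OpenNMT-style parallel files: neutralized sentence -> original positive sentence."""
    with open(src_path, "w", encoding="utf-8") as src, open(tgt_path, "w", encoding="utf-8") as tgt:
        for positive in positive_sentences:
            neutral = neutralize(positive)       # strip sentiment words (Section 3.1.1)
            src.write(neutral.strip() + "\n")    # source side: neutral sentence
            tgt.write(positive.strip() + "\n")   # target side: positive counterpart

# Usage (illustrative): build_translation_pairs(positive_corpus, neutralize_fn)
```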
## 3.2 Retrieval Of Commonsense
This module is used to retrieve additional context for the sarcastic sentence based on commonsense knowledge. Figure 3 demonstrates a schematic view of this module. We discuss the detailed process in the following sections. Additionally, we show an example input-output pair for this module in table 4.
| Input | Commonsense Sentence |
|-------|----------------------|
| His presentation was bad | The manager is criticized by his boss after a presentation |
Table 4: Example of commonsense sentence generation from input sentence
## 3.2.1 **Generation Of Commonsense Knowledge**
For generating commonsense knowledge context, COMETDISTIL (West et al., 2021) is used. First, we feed the input sentence to COMETDISTIL. COMETDISTIL is a machine-trained 1.5B-parameter commonsense model generated by applying knowledge distillation (Hinton et al., 2015) on a general language model, GPT-3. It offers 23 commonsense relation types. For our study, we use the **xEffect** relation. From the three variants of COMETDISTIL (COMETDISTIL, COMETDISTIL + critic_low and COMETDISTIL + critic_high), we have chosen COMETDISTIL + critic_high for our work. The model returns a contextual phrase pertaining to the **xEffect** relation with the extracted words of the non-sarcastic sentence. For a non-sarcastic sentence "His presentation was bad", COMETDISTIL predicts the contextual phrase with the **xEffect** relation - 'is criticized by his boss'.

![4_image_0.png](4_image_0.png)

## 3.2.2 Retrieval Of Relevant Sentences

Once we have the inferred contextual phrase, we retrieve relevant sentences. For doing so, we employ 2 methods - 1. Retrieval from corpus and 2. Generation from the inferred phrase.
- **Retrieval from corpus:** First, from the contextual phrase, we extract the keyword. Then, using the keyword, we search for related sentences in a corpus. We use Sentencedict.com (https://sentencedict.com/) as the retrieval corpus. For filtering the retrieved sentences, two constraints are set - (a) the commonsense concept should appear at the beginning or at the end of the retrieved sentences; (b) to maintain consistency between the length of the non-sarcastic input and its sarcastic variant, the sentence length should be less than twice the number of tokens in the non-sarcastic input. Next, we check the consistency of the pronoun in the retrieved sentence and the pronoun in the input sentence. If the pronoun does not match, we modify it to match the non-sarcastic text input. If the non-sarcastic input lacks a pronoun while the retrieved sentence contains one, the pronoun is simply changed to "I". These constraints for retrieving the sentences and the assessment of grammatical consistency follow Chakrabarty et al. (2020). A minimal sketch of this filtering step is given after this list.
- **Generation from the inferred phrase:** Unlike the previous method, we keep the inferred phrase intact in this case. We first extract the Subject of the non-sarcastic input. If the sentence contains no *Subject*, we set it to 'I'. Then the auxiliary verb in the inferred context is checked and modified to match with that of the *Subject*. Then we feed the *Subject* and contextual phrase to a pre-trained sentence generation model6. The model fine-tunes Google's T5 on CommonGen (Lin et al., 2019). The model returns us a commonsense sentence based on the *Subject* and contextual inference.
For example - the *Subject-inference* pair for the input "His presentation was bad" becomes
['His', 'is criticized by his boss'], and from this collection of words, the sentence "The manager is criticized by his boss after a presentation." is generated.
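A simplified sketch of the filtering constraints (a) and (b) described in the first item above; the candidate sentences and the keyword are assumed to come from the corpus search, and "beginning or end" is approximated here by the first and last two tokens.

```python
def filter_candidates(candidates: list, keyword: str, non_sarcastic: str) -> list:
    """Keep corpus sentences that satisfy constraints (a) and (b) from Section 3.2.2."""
    max_len = 2 * len(non_sarcastic.split())   # (b) at most twice the input length
    key = keyword.lower()
    kept = []
    for sent in candidates:
        tokens = sent.lower().split()
        if not tokens or len(tokens) >= max_len:
            continue
        # (a) the commonsense concept should appear at the beginning or the end of the sentence
        if key in " ".join(tokens[:2]) or key in " ".join(tokens[-2:]):
            kept.append(sent)
    return kept
```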
## 3.2.3 **Selection Based On Semantic Incongruity**
The module in section 3.2.2 returns several sentences containing the context. Among them, we choose the sentence having the highest semantic incongruity with the sentence generated after the Reversal of Valence module. For calculating the semantic incongruity, following Chakrabarty et al.
(2020), we have used the RoBERTa-large (Liu et al.,
2019) model fine-tuned on the Multi-Genre NLI
dataset (Williams et al., 2017). Considering the non-sarcastic input "His presentation was bad", the Retrieval of Relevant Sentences module yields a list of sentences such as - "The manager is criticized by his boss after a presentation", "He openly criticized the plan as impracticable", and "My boss criticized my sloppy personal appearance". From these sentences, the highest ranked sentence, "The manager is criticized by his boss after a presentation", is returned as the final output to this module as it contains the most semantic incongruity with the reversed sentence.
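The ranking step can be sketched with an off-the-shelf MNLI checkpoint from HuggingFace; `roberta-large-mnli` below is assumed as a stand-in for the exact fine-tuned model used, and candidates are scored by their contradiction probability against the reversed utterance.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def rank_by_incongruity(reversed_utterance: str, candidates: list) -> str:
    """Return the candidate context with the highest contradiction score w.r.t. the reversed utterance."""
    name = "roberta-large-mnli"  # assumed stand-in for the MNLI-fine-tuned RoBERTa-large used in the paper
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    # look up the contradiction label index instead of hard-coding it
    contradiction_id = {v.lower(): k for k, v in model.config.id2label.items()}["contradiction"]

    scores = []
    for context in candidates:
        inputs = tokenizer(reversed_utterance, context, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        scores.append(probs[contradiction_id].item())
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```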
## 3.3 Emoji Prediction
In this module, we use a pre-trained emoji prediction model which is fine-tuned on the CLIP (Contrastive Language-Image Pre-training) deep learning model by OpenAI (Radford et al., 2021) to predict an emoji from a given input. After concatenating the non-sarcastic input and the context retrieved from the Retrieval of Commonsense module, we predict an emoji based on this concatenated sentence. The model employs a masked self-attention Transformer as a text encoder and a ViT-B/32 Transformer architecture as an image encoder. By using a contrastive loss, these encoders are trained to maximize the similarity of (image, text) pairs. One version of the implementation used a Vision Transformer and the other a ResNet image encoder. The variation with the Vision Transformer is used in this case. The dataset7 used for fine-tuning the model consists of two columns: raw tweets and emoji labels. The emoji labels correspond to the appropriate one among the set of 32 emojis shown in Figure 4.
![5_image_0.png](5_image_0.png)
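The emoji classifier can be viewed as a classification head on top of CLIP's text features. The sketch below uses the `openai/clip-vit-base-patch32` checkpoint and an untrained linear head as illustrative stand-ins for the fine-tuned predictor that is actually used.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

NUM_EMOJIS = 32  # size of the label set in Figure 4

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
head = torch.nn.Linear(clip.config.projection_dim, NUM_EMOJIS)  # would be trained on the tweet/emoji data

def predict_emoji_id(text: str) -> int:
    """Return the index of the most probable emoji class for the given text."""
    inputs = processor(text=[text], return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        features = clip.get_text_features(**inputs)  # CLIP text embedding
        logits = head(features)
    return int(logits.argmax(dim=-1))
```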
## 4 Experimental Setup
The dataset, model configurations for the different modules, and the evaluation criteria for our work are all discussed in the following sub sections.
## 4.1 Dataset
For our experiments, we utilize the Positive and Negative sentiment corpora by Mishra et al. (2019)
which contain tweets and short snippets. Tweets have been normalized by eliminating hashtags and usernames, and by conducting spell checking and lexical normalization using NLTK (Loper and Bird, 2002). After filtering out sentences longer than 30 words and running them through all three modules, we obtain the final dataset of 2k sarcastic sentences from the Mishra et al. (2019) dataset. We have made our dataset8 publicly available.
## 4.2 Model Configurations
The sentiment classification model of the neutralization module is trained on the sentiment dataset

7 https://huggingface.co/datasets/vincentclaes/emoji-predictor
8 https://github.com/WrightlyRong/Sarcasm-Generation-with-Emoji
| System | Sarcastic Utterance | Sarcasticness | Creativity | Humor | Grammaticality |
|--------|---------------------|---------------|------------|-------|----------------|
| Full Model | Happy to be home with the fam. Being incarcerated-under the label of being mentally ill. | 3.67 | 4.33 | 4 | 5 |
| Without Emoji | Happy to be home with the fam. Being incarcerated-under the label of being mentally ill. | 3.67 | 4.33 | 3.67 | 5 |
| Without Context | Happy to be home with the fam. | 3.33 | 3 | 3 | 5 |
| R3 (Chakrabarty et al., 2020) | Home with the not flu. | 1.67 | 1.33 | 1.33 | 3 |
| Full Model | The boss just ended and took the mac away awesome. Angry is not the word for it - I was furious. | 5 | 5 | 4.67 | 4.33 |
| Without Emoji | The boss just ended and took the mac away awesome. Angry is not the word for it - I was furious. | 4 | 3.67 | 3 | 4.67 |
| Without Context | The boss just ended and took the mac away awesome. | 5 | 5 | 4.67 | 4.33 |
| R3 (Chakrabarty et al., 2020) | The boss just came and took the mac away. Angry is not the word for it - I was furious. | 1.67 | 2.33 | 1.67 | 5 |
| Full Model | Friday nights are so cute when the boyfriend is working rearrange and then i have to work at on mornings. At least they weren't bored. | 4 | 4 | 3.67 | 4 |
| Without Emoji | Friday nights are so cute when the boyfriend is working rearrange and then i have to work at on mornings. At least they weren't bored. | 4 | 4 | 3.67 | 4 |
| Without Context | Friday nights are so cute when the boyfriend is working rearrange and then i have to work at on mornings. | 4 | 4 | 3.67 | 4 |
| R3 (Chakrabarty et al., 2020) | Friday nights are so boring when the boyfriend is working early and then I have to work at on saturday mornings. Friday saw the latest addition to darlington's throbbing night life packed to the rafters. | 1.33 | 2 | 1.33 | 5 |
| Full Model | Just finished workin feeling good. My stomach heaved and I felt sick. | 5 | 5 | 4.67 | 5 |
| Without Emoji | Just finished workin feeling good. My stomach heaved and I felt sick. | 5 | 5 | 4.67 | 5 |
| Without Context | Just finished workin feeling good. | 3 | 3 | 3 | 5 |
| R3 (Chakrabarty et al., 2020) | Just finished workin bed feeling healthy. My stomach heaved and I felt sick. | 5 | 4.33 | 4.67 | 5 |
given by Mishra et al. (2019) where the negative sentences are labeled as 1 and the positive sentences are labeled as 0. Each word in the input sentence is first encoded with one-hot encoding and turned into a K-dimensional embedding. Then, these embeddings go through an LSTM layer with 200 hidden units, a self-attention layer, an LSTM
layer with 150 hidden units and finally a softmax layer. The classifier is trained for 10 epochs with a batch size of 32, and achieves a validation accuracy of 96% and a test accuracy of 95.7%.
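A PyTorch sketch of the classifier configuration described above (embedding → LSTM(200) → self-attention → LSTM(150) → softmax); the vocabulary size, embedding dimension, and the simple one-layer attention scoring are assumptions, not the exact training code.

```python
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    """Embedding -> LSTM(200) -> self-attention -> LSTM(150) -> 2-way softmax (via logits)."""
    def __init__(self, vocab_size: int = 20000, emb_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm1 = nn.LSTM(emb_dim, 200, batch_first=True)
        self.attn = nn.Linear(200, 1)             # one attention score per token
        self.lstm2 = nn.LSTM(200, 150, batch_first=True)
        self.out = nn.Linear(150, 2)              # negative (1) vs. positive (0)

    def forward(self, token_ids: torch.Tensor):
        h, _ = self.lstm1(self.embed(token_ids))              # (batch, seq, 200)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=-1)   # (batch, seq) attention weights
        h, _ = self.lstm2(h * a.unsqueeze(-1))                # re-weighted states -> LSTM(150)
        logits = self.out(h[:, -1])                           # last hidden state -> 2 classes
        return logits, a   # the weights are what Eq. 1 discretizes during neutralization

# model = SentimentClassifier()  # trained for 10 epochs with batch size 32 in the paper
```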
The positive sentiment induction module is built on top of the OpenNMT 3.0 framework, and following Mishra et al. (2019), the embedding dimensions of the encoder and decoder are set to 500, with 2 LSTM layers each consisting of 500 hidden units. The number of training iterations is set to 100000 and early stopping is incorporated to prevent overfitting. After training, the model produced a corpus-BLEU score of 51.3%.
## 4.3 Evaluation Criteria
For evaluating the performance of our proposed architecture, we incorporate human judgement. To assess the quality of the generated dataset, we compare among 4 systems.
1. **Full Model** contains all the proposed modules of the framework and generates the final dataset.
2. **Without Emoji** system includes the context sentences along with the outputs from the reversal of valence module but does not contain any emoji that goes with each sarcastic sentence.
3. **Without Context** system consists of generations from the reversal of valence module as well as emoji. It does not include any context.
4. **R3** is the state-of-the-art sarcasm generation system proposed by Chakrabarty et al. (2020).
To assess each of the four systems, we randomly choose 100 samples from our sarcastic dataset, which totals 400 outputs from the four systems. We evaluate these 400 generated sentences to compare the 4 above-mentioned systems.
Following the evaluation approach proposed by Chakrabarty et al. (2020), we evaluate the generated sentences on these criteria:
1. Sarcasticness ("How sarcastic is the output?"),
2. Creativity ("How creative is the output?"),
3. Humour ("How funny is the output?"),
4. Grammaticality ("How grammatically correct is the output?").
Previous studies on sarcasm generation have employed sarcasticness as a criterion for evaluating the effectiveness of the generated outputs (Mishra et al., 2019; Chakrabarty et al., 2020; Das et al.,
2022). As sarcasm exemplifies linguistic creativity
(Gerrig and Gibbs Jr, 1988), creativity has been proposed as a method for operationalizing the quality of sarcastic sentences by Skalicky and Crossley (2018). The association between humor and sarcasm is frequently mentioned in literature as well (Dress et al., 2008; Lampert and Ervin-Tripp, 2006; Leggitt and Gibbs, 2000; Bowes and Katz, 2011). The grammaticality criterion assesses the syntactic accuracy and conformity of the generated sentences.
Three human judges have been chosen to rate the outputs from the 4 systems on the 4 criteria mentioned. The label indicates a rating on a scale of 1 (not at all) to 5 (very). All 3 judges label each of the 400 sentences from the 4 systems. The human judges have been chosen based on their high proficiency in English and their good grasp of understanding and differentiating between Creativity, Humor and Sarcasticness in English sentences.
To assess the inter-annotator agreement for the ratings, we incorporated the Intraclass Correlation Coefficient (ICC). ICC is a statistical measure used to assess the degree of agreement or correlation among the ratings given by different evaluators or raters for a certain category or metric. The agreement scores are shown in table 6. The ICC score ranges between 0 and 1 where a higher score indicates a greater agreement among the raters. For all the four systems evaluated in our work, the ratings by 3 judges for the 4 evaluation criteria yield ICC scores above 0.9 in each case. A score above 0.9 indicates highly consistent observations and excellent agreement among the 3 judges.
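For reference, an ICC of this kind can be computed from a long-format table of ratings, e.g. with the `pingouin` package; the ratings below are made up and only illustrate the call, they are not the study's data.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per (sentence, judge) pair; ratings here are illustrative.
df = pd.DataFrame({
    "sentence": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "judge":    ["A", "B", "C"] * 3,
    "sarcasticness": [4, 5, 4, 2, 2, 3, 5, 5, 5],
})

icc = pg.intraclass_corr(data=df, targets="sentence", raters="judge", ratings="sarcasticness")
print(icc[["Type", "ICC"]])  # e.g. ICC2k for average agreement across the three judges
```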
Besides human evaluation, we also evaluate our generated data against an emoji-based sarcasm detection model trained on an existing emoji-based sarcastic dataset. For this, we utilize the work of Subramanian et al. (2019) and use their proposed sarcasm detection model trained with their dataset.
Their data samples were tweets with emojis scraped from Twitter and were labeled either 1 (sarcastic)
| System | ICC (S) | ICC (C) | ICC (H) | ICC (G) |
|--------|---------|---------|---------|---------|
| Full Model | 0.90 | 0.92 | 0.92 | 0.94 |
| Without Emoji | 0.95 | 0.96 | 0.95 | 0.92 |
| Without Context | 0.93 | 0.94 | 0.94 | 0.93 |
| R3 (Chakrabarty et al., 2020) | 0.97 | 0.97 | 0.97 | 0.97 |
or 0 (non-sarcastic). The model consists of a BiGRU with a text encoder and an emoji encoder. We add 2k non-sarcastic texts with our generated 2k sarcastic texts and test the model with these data.
The model's performance is discussed in section 5.
## 5 Experimental Results & Analysis
| System | Variance (S) | Variance (C) | Variance (H) | Variance (G) |
|--------|--------------|--------------|--------------|--------------|
| Full Model | 0.62 | 0.59 | 0.60 | 0.96 |
| Without Emoji | 0.74 | 0.73 | 0.65 | 0.96 |
| Without Context | 0.57 | 0.43 | 0.44 | 1.02 |
| R3 (Chakrabarty et al., 2020) | 1.48 | 1.17 | 1.16 | 0.99 |
Table 5 shows the comparison between a few sample sarcastic outputs across the various systems (our full model, output without the context, output without any emoji, and lastly the state-of-the-art model of Chakrabarty et al. (2020)) on different measures (Sarcasticness, Creativity, Humor and Grammaticality). Each score is the average rating given by the three human judges. Table 7 shows the variances among each evaluation criterion for each of the four systems. The variances among the four criteria for the system R3 are higher than those of all the other systems.
Table 8 shows the average ratings on 100 samples by human judges for generated sarcastic sentences from the four systems based on the four categories.
Our full model achieves the highest average score among all the systems including the state-of-the-art sarcasm generation model by Chakrabarty et al.
(2020) on three of the four categories except Grammaticality. Besides the full model, the without
| System | Sarcasticness | Creativity | Humor | Grammaticality |
|------------------------------|-----------------|--------------|---------|------------------|
| Full Model | 3.44 | 3.29 | 3.16 | 3.72 |
| Without Emoji | 2.77 | 2.83 | 2.69 | 3.7 |
| Without Context | 3.1 | 2.99 | 2.88 | 3.72 |
| R3 (Chakrabarty et al., 2020) | 2.32 | 2.2 | 2.1 | 4.29 |
emoji system and without context system also outperform the state-of-the-art on Sarcasticness, Creativity and Humor. Our system lags in Grammaticality due to the fact that we replace the rule-based approach of the reversal of valence module by Chakrabarty et al. (2020) with a deep learning approach, which results in a slightly more significant information loss. However, the rule-based model performs worse on the other three categories as it fails to generalize to all types of sentence structures. It is apparent from the scores that context plays an important role in recognising a sarcastic sentence. Additionally, the notable improvement in the score for the full model compared to the without emoji model suggests that emojis clearly help to better detect the incongruity that exists in sarcastic utterances.
The emoji-based sarcasm detection model by Subramanian et al. (2019) gives an F1-score of 67.28%
and an ROC AUC score of 53.33% on our generated data samples. It is to be noted that the model's training data samples have significantly different sentence structure than the test samples.
## Conclusion
We propose a novel multi-modular framework for sarcasm generation with emoji considering two key characteristics of sarcasm: reversal of valence and semantic incongruity between the sarcastic remark and the context. To generate sarcastic sentences, we first neutralize the input sentence's sentiment and then add positive sentiment to the sentence to reverse its meaning. We also incorporate a relevant emoji and its contextual information to enhance the sarcastic effect. We conclude by evaluating our model using human judgement.
## Limitations
Although our proposed architecture successfully generates emoji-based sarcastic sentences from non-sarcastic texts, in some cases, particularly longer sentences, adding commonsense context does not add much to make it more sarcastic as in such cases, the longer sentences already contain the contextual information. In future, we plan to modify our architecture in a way such that it can identify whether or not adding commonsense context would be necessary.
In our work, we have used COMETDISTIL to generate additional commonsense context. So the performance of our proposed architecture heavily depends on the accuracy of COMETDISTIL. In future, we would like to find and incorporate better models for generating commonsense context.
The low grammaticality score by our final model is likely to be caused by the insufficient training data for the Positive Sentiment Induction module for which the model could not generalize properly. We believe that there is still room for improvement here by collecting and adding more training samples to improve the model's performance. To further fix the grammatical errors we plan to add another module after the Positive Induction module where the module will use a Transformer based grammar correction model which will take a sentence with bad grammar and output a grammatically correct sentence.
Lastly, our emoji prediction module only predicts one emoji per sentence. However, to make a sentence sarcastic, it is not uncommon to use more than one emoji. Hence, we plan to explore multilabel emoji prediction in the future.
## References
Marc Aguert. 2022. Paraverbal expression of verbal irony: vocal cues matter and facial cues even more.
Journal of Nonverbal Behavior, 46(1):45–70.
Silvio Amir, Byron C Wallace, Hao Lyu, Paula Carvalho, and Mário J Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. *arXiv preprint arXiv:1607.00976*.
Adithya Avvaru, Sanath Vobilisetty, and Radhika Mamidi. 2020. Detecting sarcasm in conversation context using transformer-based models. In *Proceedings of the Second Workshop on Figurative Language* Processing, pages 98–103.
David Bamman and Noah Smith. 2015. Contextualized sarcasm detection on twitter. In proceedings of the international AAAI conference on web and social media, volume 9, pages 574–577.
Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. In *proceedings of the 5th workshop on* computational approaches to subjectivity, sentiment and social media analysis, pages 50–58.
Arup Baruah, Kaushik Das, Ferdous Barbhuiya, and Kuntal Dey. 2020. Context-aware sarcasm detection using bert. In Proceedings of the Second Workshop on Figurative Language Processing, pages 83–87.
Christos Baziotis, Nikos Athanasiou, Georgios Paraskevopoulos, Nikolaos Ellinas, Athanasia Kolovou, and Alexandros Potamianos. 2018. Ntuaslp at semeval-2018 task 2: Predicting emojis using rnns with context-aware attention. arXiv preprint arXiv:1804.06657.
Santosh Kumar Bharti, Korra Sathya Babu, and Sanjay Kumar Jena. 2015. Parsing-based sarcasm sentiment recognition in twitter data. In *2015 IEEE/ACM*
International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 1373–
1380. IEEE.
Santosh Kumar Bharti, Ramkrushna Pradhan, Korra Sathya Babu, and Sanjay Kumar Jena. 2017. Sarcasm analysis on twitter data using machine learning approaches. *Trends in Social Network Analysis*,
pages 51–76.
Santosh Kumar Bharti, Bakhtyar Vachha, RK Pradhan, Korra Sathya Babu, and Sanjay Kumar Jena. 2016.
Sarcastic sentiment detection in tweets streamed in real time: a big data approach. *Digital Communications and Networks*, 2(3):108–121.
Andrea Bowes and Albert Katz. 2011. When sarcasm stings. *Discourse Processes*, 48(4):215–236.
Christian Burgers, Margot Van Mulken, and Peter Jan Schellens. 2012. Verbal irony: Differences in usage across written genres. *Journal of Language and* Social Psychology, 31(3):290–310.
Marius Cristian Buzea, Stefan Trausan-Matu, and Traian Rebedea. 2022. Automatic fake news detection for romanian online news. *Information*, 13(3):151.
Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multimodal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506–2515.
John D Campbell and Albert N Katz. 2012. Are there necessary conditions for inducing a sense of sarcastic irony? *Discourse Processes*, 49(6):459–480.
Spencer Cappallo, Thomas Mensink, and Cees GM
Snoek. 2015. Image2emoji: Zero-shot emoji prediction for visual media. In Proceedings of the 23rd ACM international conference on Multimedia, pages 1311–1314.
Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, and Nanyun Peng. 2020. R3: Reverse, retrieve, and rank for sarcasm generation with commonsense knowledge. In Annual Meeting of the Association for Computational Linguistics.
Dushyant Singh Chauhan, Gopendra Vikram Singh, Aseem Arora, Asif Ekbal, and Pushpak Bhattacharyya. 2022. An emoji-aware multitask framework for multimodal sarcasm detection. *KnowledgeBased Systems*, 257:109924.
Tanvi Dadu and Kartikey Pant. 2020. Sarcasm detection using context separators in online discourse. In Proceedings of the Second Workshop on Figurative Language Processing, pages 51–55.
Sourav Das, Soumitra Ghosh, Anup Kumar Kolya, and Asif Ekbal. 2022. Unparalleled sarcasm: a framework of parallel deep lstms with cross activation functions towards detection and generation of sarcastic statements. *Language Resources and Evaluation*,
pages 1–38.
Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010.
Semi-supervised recognition of sarcasm in twitter and amazon. In *Proceedings of the fourteenth conference on computational natural language learning*,
pages 107–116.
Yufeng Diao, Hongfei Lin, Liang Yang, Xiaochao Fan, Yonghe Chu, Kan Xu, and Di Wu. 2020. A multidimension question answering network for sarcasm detection. *IEEE Access*, 8:135152–135161.
Ning Ding, Sheng-wei Tian, and Long Yu. 2022. A multimodal fusion method for sarcasm detection based on late fusion. *Multimedia Tools and Applications*,
81(6):8597–8616.
Xiangjue Dong, Changmao Li, and Jinho D Choi. 2020.
Transformer-based context-aware sarcasm detection in conversation threads from social media. arXiv preprint arXiv:2005.11424.
Megan L Dress, Roger J Kreuz, Kristen E Link, and Gina M Caucci. 2008. Regional variation in the use of sarcasm. Journal of Language and Social Psychology, 27(1):71–85.
Ibrahim Abu Farha and Walid Magdy. 2020. From arabic sentiment analysis to sarcasm detection: The arsarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 32–39.
Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm.
arXiv preprint arXiv:1708.00524.
Elena Filatova. 2012. Irony and sarcasm: Corpus generation and analysis using crowdsourcing. In *Lrec*,
pages 392–398. Citeseer.
Richard J Gerrig and Raymond W Gibbs Jr. 1988. Beyond the lexicon: Creativity in language production.
Metaphor and Symbol, 3(3):1–19.
Richard J Gerrig and Yevgeniya Goldvarg. 2000. Additive effects in the perception of sarcasm: Situational disparity and echoic mention. *Metaphor and Symbol*,
15(4):197–208.
Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network. In *Proceedings of the* 7th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 161–169.
Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal. In *Proceedings of the 2017* Conference on Empirical Methods in Natural Language Processing, pages 482–491.
Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. *Computational Linguistics*, 44(4):755–792.
Hunter Gregory, Steven Li, Pouya Mohammadi, Natalie Tarn, Rachel Draelos, and Cynthia Rudin. 2020. A
transformer approach to contextual sarcasm detection in twitter. In Proceedings of the second workshop on figurative language processing, pages 270–275.
Raj Kumar Gupta and Yinping Yang. 2017. Crystalnest at semeval-2017 task 4: Using sarcasm detection for enhancing sentiment classification and quantification.
In *Proceedings of the 11th International Workshop* on Semantic Evaluation (SemEval-2017), pages 626–
633.
Devamanyu Hazarika, Soujanya Poria, Sruthi Gorantla, Erik Cambria, Roger Zimmermann, and Rada Mihalcea. 2018. Cascade: Contextual sarcasm detection in online discussion forums. *arXiv preprint* arXiv:1805.06413.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Suzana Ilić, Edison Marrese-Taylor, Jorge A. Balazs, and Yutaka Matsuo. 2018. Deep contextualized word representations for detecting sarcasm and irony. *arXiv preprint arXiv:1809.09795*.
Tanya Jain, Nilesh Agrawal, Garima Goyal, and Niyati Aggrawal. 2017. Sarcasm detection of tweets: A
comparative study. In *2017 Tenth International Conference on Contemporary Computing (IC3)*, pages 1–6. IEEE.
Nikhil Jaiswal. 2020. Neural sarcasm detection using conversation context. In *Proceedings of the Second* Workshop on Figurative Language Processing, pages 77–82.
Soroush Javdan, Behrouz Minaei-Bidgoli, et al. 2020.
Applying transformers and aspect-based sentiment analysis approaches on sarcasm detection. In *Proceedings of the second workshop on figurative language processing*, pages 67–71.
Amit Kumar Jena, Aman Sinha, and Rohit Agarwal.
2020. C-net: Contextual network for sarcasm detection. In *Proceedings of the second workshop on* figurative language processing, pages 61–66.
Aditya Joshi, Diptesh Kanojia, Pushpak Bhattacharyya, and Mark Carman. 2017. Sarcasm suite: a browserbased engine for sarcasm detection and generation.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Aditya Joshi, Anoop Kunchukuttan, Pushpak Bhattacharyya, and Mark James Carman. 2015. Sarcasmbot: An open-source sarcasm-generation module for chatbots. In *WISDOM Workshop at KDD*.
Aditya Joshi, Vaibhav Tripathi, Kevin Patel, Pushpak Bhattacharyya, and Mark Carman. 2016. Are word embedding-based features useful for sarcasm detection? *arXiv preprint arXiv:1610.00883*.
A Kalaivani and D Thenmozhi. 2020. Sarcasm identification and detection in conversion context using bert.
In Proceedings of the Second Workshop on Figurative Language Processing, pages 72–76.
Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli.
2017. A large self-annotated corpus for sarcasm.
arXiv preprint arXiv:1704.05579.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Opensource toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
Akshi Kumar, Saurabh Raj Sangwan, Anshika Arora, Anand Nayyar, Mohamed Abdel-Basset, et al. 2019.
Sarcasm detection using soft attention-based bidirectional long short-term memory model with convolution network. *IEEE access*, 7:23319–23328.
Amardeep Kumar and Vivek Anand. 2020. Transformers on sarcasm detection with context. In *Proceedings of the second workshop on figurative language* processing, pages 88–92.
Aaron Maladry, Els Lefever, Cynthia Van Hee, and Veronique Hoste. 2022. Irony detection for dutch:
a venture into the implicit. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 172–181.
Avinash Kumar, Vishnu Teja Narapareddy, Pranjal Gupta, Veerubhotla Aditya Srikanth, Lalita Bhanu Murthy Neti, and Aruna Malapati. 2021. Adversarial and auxiliary features-aware bert for sarcasm detection. In 8th ACM IKDD CODS and 26th COMAD, pages 163–170.
Avinash Kumar, Vishnu Teja Narapareddy, Veerubhotla Aditya Srikanth, Aruna Malapati, and Lalita Bhanu Murthy Neti. 2020. Sarcasm detection using multi-head attention based bidirectional lstm. Ieee Access, 8:6388–6397.
Martin D Lampert and Susan M Ervin-Tripp. 2006.
Risky laughter: Teasing and self-directed joking among male and female friends. *Journal of Pragmatics*, 38(1):51–72.
Hankyol Lee, Youngjae Yu, and Gunhee Kim. 2020.
Augmenting data for sarcasm detection with unlabeled conversation context. arXiv preprint arXiv:2006.06259.
John S Leggitt and Raymond W Gibbs. 2000. Emotional reactions to verbal irony. *Discourse processes*,
29(1):1–24.
Bin Liang, Chenwei Lou, Xiang Li, Min Yang, Lin Gui, Yulan He, Wenjie Pei, and Ruifeng Xu. 2022. Multimodal sarcasm detection via cross-modal graph convolutional network. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1767–
1777.
Usman Naseem, Imran Razzak, Peter Eklund, and Katarzyna Musial. 2020. Towards improved deep contextual embedding for the identification of irony and sarcasm. In *2020 International Joint Conference* on Neural Networks (IJCNN), pages 1–7. IEEE.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Shereen Oraby, Vrindavan Harrison, Amita Misra, Ellen Riloff, and Marilyn Walker. 2017. Are you serious?:
Rhetorical questions and sarcasm in social media dialog. *arXiv preprint arXiv:1709.05305*.
Reynier Ortega-Bueno, Paolo Rosso, and José E Medina Pagola. 2022. Multi-view informed attention-based model for irony and satire detection in spanish variants. *Knowledge-Based Systems*, 235:107597.
Chenwei Lou, Bin Liang, Lin Gui, Yulan He, Yixue Dang, and Ruifeng Xu. 2021. Affective dependency graph for sarcasm detection. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1844–1849.
Rajnish Pandey and Jyoti Prakash Singh. 2023. Bertlstm model for sarcasm detection in code-mixed social media post. Journal of Intelligent Information Systems, 60(1):235–254.
Navonil Majumder, Soujanya Poria, Haiyun Peng, Niyati Chhaya, Erik Cambria, and Alexander Gelbukh.
2019. Sentiment and sarcasm classification with multitask learning. *IEEE Intelligent Systems*, 34(3):38–
43.
Diana G Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In Lrec 2014 proceedings. ELRA.
Abhijit Mishra, Tarun Tater, and Karthik Sankaranarayanan. 2019. A modular architecture for unsupervised sarcasm generation. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 6144–6154.
Abdullah Y Muaad, Hanumanthappa Jayappa Davanagere, JV Benifa, Amerah Alabrah, Mufeed Ahmed Naji Saif, D Pushpa, Mugahed A Al-Antari, and Taha M Alfakih. 2022. Artificial intelligence-based approach for misogyny and sarcasm detection from arabic texts. *Computational Intelligence and Neuroscience*, 2022.
Shubhadeep Mukherjee and Pradip Kumar Bala. 2017.
Detecting sarcasm in customer tweets: an nlp based approach. *Industrial Management & Data Systems*.
Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2019. Commongen: A constrained text generation challenge for generative commonsense reasoning. *arXiv preprint arXiv:1911.03705*.
Silviu Oprea, Steven Wilson, and Walid Magdy. 2021.
Chandler: An explainable sarcastic response generator. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing:
System Demonstrations, pages 339–349.
Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. *arXiv preprint cs/0205028*.
Soujanya Poria, Erik Cambria, Devamanyu Hazarika, and Prateek Vij. 2016. A deeper look into sarcastic tweets using deep convolutional neural networks.
arXiv preprint arXiv:1610.08815.
Rolandos-Alexandros Potamias, Georgios Siolas, and Andreas Stafylopatis. 2019. A robust deep ensemble classifier for figurative language detection. In International Conference on Engineering Applications of Neural Networks, pages 164–175. Springer.
Rolandos Alexandros Potamias, Georgios Siolas, and Andreas-Georgios Stafylopatis. 2020. A transformerbased approach to irony and sarcasm detection.
Neural Computing and Applications, 32(23):17309–
17320.
Anukarsh G Prasad, S Sanjana, Skanda M Bhat, and BS Harish. 2017. Sentiment analysis for sarcasm detection on streaming short text data. In 2017 2nd International Conference on Knowledge Engineering and Applications (ICKEA), pages 1–5. IEEE.
Tomáš Ptáček, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on Czech and English Twitter. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: Technical papers, pages 213–223.
Veranika Puhacheuskaya and Juhani Järvikivi. 2022. I
was being sarcastic!: The effect of foreign accent and political ideology on irony (mis) understanding. *Acta* Psychologica, 222:103479.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pages 8748–8763. PMLR.
Yafeng Ren, Donghong Ji, and Han Ren. 2018. Contextaugmented convolutional neural networks for twitter sarcasm detection. *Neurocomputing*, 308:1–7.
Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013.
Sarcasm as contrast between a positive sentiment and negative situation. In *Proceedings of the 2013* conference on empirical methods in natural language processing, pages 704–714.
Jie Ruan, Yue Wu, Xiaojun Wan, and Yuesheng Zhu.
2022. How to describe images in a more funny way?
towards a modular approach to cross-modal sarcasm generation. *arXiv preprint arXiv:2211.10992*.
Edoardo Savini and Cornelia Caragea. 2022.
Intermediate-task transfer learning with bert for sarcasm detection. *Mathematics*, 10(5):844.
Rossano Schifanella, Paloma De Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In *Proceedings of the* 24th ACM international conference on Multimedia, pages 1136–1145.
Boaz Shmueli, Lun-Wei Ku, and Soumya Ray. 2020.
Reactive supervision: A new method for collecting sarcasm data. *arXiv preprint arXiv:2009.13080*.
Stephen Skalicky and Scott Crossley. 2018. Linguistic features of sarcasm and metaphor production quality. In Proceedings of the Workshop on Figurative Language Processing, pages 7–16.
Himani Srivastava, Vaibhav Varshney, Surabhi Kumari, and Saurabh Srivastava. 2020. A novel hierarchical bert architecture for sarcasm detection. In Proceedings of the Second Workshop on Figurative Language Processing, pages 93–97.
Jayashree Subramanian, Varun Sridharan, Kai Shu, and Huan Liu. 2019. Exploiting emojis for sarcasm detection. In *International conference on social computing, behavioral-cultural modeling and prediction and* behavior representation in modeling and simulation, pages 70–80. Springer.
Yi Tay, Luu Anh Tuan, Siu Cheung Hui, and Jian Su. 2018. Reasoning with sarcasm by reading inbetween. *arXiv preprint arXiv:1805.02856*.
Cynthia Van Hee, Els Lefever, and Véronique Hoste.
2018. Semeval-2018 task 3: Irony detection in english tweets. In *Proceedings of The 12th International Workshop on Semantic Evaluation*, pages 39–
50.
Zhiyuan Wen, Lin Gui, Qianlong Wang, Mingyue Guo, Xiaoqi Yu, Jiachen Du, and Ruifeng Xu. 2022.
Sememe knowledge and auxiliary information enhanced approach for sarcasm detection. *Information* Processing & Management, 59(3):102883.
Peter West, Chandra Bhagavatula, Jack Hessel, Jena D
Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: from general language models to commonsense models. *arXiv preprint* arXiv:2110.07178.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
Deirdre Wilson. 2006. The pragmatics of verbal irony:
Echo or pretence? *Lingua*, 116(10):1722–1743.
Chuhan Wu, Fangzhao Wu, Sixing Wu, Zhigang Yuan, Junxin Liu, and Yongfeng Huang. 2018. Thu_ngn at semeval-2018 task 2: Residual cnn-lstm network with attention for english emoji prediction. In *Proceedings of The 12th International Workshop on Semantic Evaluation*, pages 410–414.
Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li. 2018.
Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. arXiv preprint arXiv:1805.05181.
Meishan Zhang, Yue Zhang, and Guohong Fu. 2016.
Tweet sarcasm detection using deep neural network.
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics:
technical papers, pages 2449–2460.
Yazhou Zhang, Dan Ma, Prayag Tiwari, Chen Zhang, Mehedi Masud, Mohammad Shorfuzzaman, and Dawei Song. 2023. Stance-level sarcasm detection with bert and stance-centered graph attention networks. *ACM Transactions on Internet Technology*,
23(2):1–21.
## A Appendix
Table 9: Summary of sarcasm detection datasets from different social media platforms

| Dataset | Samples | Platform |
|---|---|---|
| Filatova (2012) | 1254 | Amazon |
| Riloff et al. (2013) | 1600 | Twitter |
| Ptáček et al. (2014) | 920000 | Twitter |
| Barbieri et al. (2014) | 60000 | Twitter |
| Bamman and Smith (2015) | 19534 | Twitter |
| Amir et al. (2016) | 11541 | Twitter |
| Bharti et al. (2016) | 1.5M | Twitter |
| Joshi et al. (2016) | 3629 | Goodreads |
| Ghosh and Veale (2016) | 41000 | Twitter |
| Poria et al. (2016) | 100000 | Twitter |
| Schifanella et al. (2016) | 600925 | Instagram, Tumblr, Twitter |
| Zhang et al. (2016) | 9104 | Twitter |
| Felbo et al. (2017) | 1.6B | Twitter |
| Ghosh and Veale (2017) | 41200 | Twitter |
| Khodak et al. (2017) | 533.3M | Reddit |
| Oraby et al. (2017) | 10270 | Debate forum |
| Prasad et al. (2017) | 2000 | Twitter |
| Baziotis et al. (2018) | 550M | Twitter |
| Hazarika et al. (2018) | 219368 | Reddit |
| Ghosh et al. (2018) | 36391 | Twitter, Reddit, Discussion forum |
| Ilić et al. (2018) | 419822 | Twitter, Reddit, Debate forum |
| Tay et al. (2018) | 94238 | Twitter, Reddit, Debate forum |
| Van Hee et al. (2018) | 4792 | Twitter |
| Wu et al. (2018) | 4618 | Twitter |
| Majumder et al. (2019) | 994 | Twitter |
| Cai et al. (2019) | 24635 | Twitter |
| Kumar et al. (2019) | 24635 | Twitter, Reddit, Debate forum |
| Subramanian et al. (2019) | 12900 | Twitter, Facebook |
| Jena et al. (2020) | 13000 | Twitter, Reddit |
| Potamias et al. (2020) | 533.3M | Twitter, Reddit |
| Study | Data | Architecture | Reported scores |
|---|---|---|---|
| Davidov et al. (2010) | Tweets | SASI (Semi-supervised Algorithm for Sarcasm Identification) | 0.896 / 0.545 / 0.727 / 0.436 |
| Gupta and Yang (2017) | Tweets | CrystalNet | 0.60 / 0.52 / 0.70 |
| Bharti et al. (2017) | Tweets | PBLGA with SVM | 0.67 / 0.67 / 0.68 |
| Mukherjee and Bala (2017) | Tweets | Naive Bayes | 0.73 |
| Jain et al. (2017) | Tweets | Weighted Ensemble | 0.853 / 0.831 / 0.298 |
| Poria et al. (2016) | Tweets | CNN-SVM | 0.9771 |
| Ghosh and Veale (2016) | Tweets | CNN-LSTM-DNN | 0.901 / 0.894 / 0.912 |
| Zhang et al. (2016) | Tweets | GRNN | 0.9074 / 0.9074 |
| Oraby et al. (2017) | Tweets | SVM + W2V + LIWC | 0.83 / 0.80 / 0.86 |
| Hazarika et al. (2018) | Reddit posts | CASCADE | 0.79 / 0.86 |
| Ren et al. (2018) | Tweets | CANN-KEY; CANN-ALL | 0.6328; 0.6205 |
| Tay et al. (2018) | Tweets, Reddit posts | MIARN | Twitter: 0.8647 / 0.86 / 0.8613 / 0.8579; Reddit: 0.6091 / 0.6922 / 0.6935 / 0.7005 |
| Ghosh et al. (2018) | Reddit posts | Multiple LSTM | 0.7458 / 0.7607 / 0.7762 |
| Diao et al. (2020) | Internet arguments | MQA (Multi-dimension Question Answering model) | 0.762 / 0.701 / 0.835 |
| Kumar et al. (2020) | Reddit posts | MHA-BiLSTM | 0.7748 / 0.7263 / 0.8303 |
| Kumar et al. (2019) | Tweets | sAtt-BiLSTM convNet | 0.9371 |
| Majumder et al. (2019) | Text snippets | Multi-task learning with fusion and shared attention | 0.866 / 0.9101 / 0.9074 |
| Potamias et al. (2019) | Laptop and restaurant reviews | DESC (Deep Ensemble Soft Classifier) | 0.74 / 0.73 / 0.73 / 0.73 |
| Srivastava et al. (2020) | Tweets, Reddit posts | BERT + BiLSTM + CNN | Twitter: 0.74; Reddit: 0.639 |
| Gregory et al. (2020) | Tweets, Reddit posts | Transformer ensemble (BERT, RoBERTa, XLNet, RoBERTa-large, ALBERT) | 0.756 / 0.758 / 0.767 |
| Potamias et al. (2020) | Tweets, Reddit politics | RCNN-RoBERTa | Twitter: 0.91 / 0.90 / 0.90 / 0.90; Reddit: 0.79 / 0.78 / 0.78 / 0.78 |
| Javdan et al. (2020) | Tweets; Reddit posts | LCF-BERT; BERT-base-cased | 0.73; 0.734 |
| Lee et al. (2020) | Tweets, Reddit posts | BERT + BiLSTM + NeXtVLAD | Twitter: 0.8977 / 0.8747 / 0.9219; Reddit: 0.7513 / 0.6938 / 0.8187 |
| Baruah et al. (2020) | Tweets, Reddit posts | BERT-large-uncased | Twitter: 0.743 / 0.744 / 0.748; Reddit: 0.658 / 0.658 / 0.658 |
| Avvaru et al. (2020) | Tweets, Reddit posts | BERT | Twitter: 0.752; Reddit: 0.621 |
| Jaiswal (2020) | Tweets, Reddit posts | Ensemble of several combinations of RoBERTa-large | 0.790 / 0.790 / 0.792 |
| Shmueli et al. (2020) | Tweets | BERT | 0.703 / 0.699 / 0.70 / 0.7741 |
| Dadu and Pant (2020) | Tweets, Reddit posts | RoBERTa-large | Twitter: 0.772 / 0.772 / 0.772; Reddit: 0.716 / 0.716 / 0.718 |
| Kalaivani and Thenmozhi (2020) | Tweets, Reddit posts | BERT | Twitter: 0.722 / 0.722 / 0.722; Reddit: 0.679 / 0.679 / 0.679 |
| Naseem et al. (2020) | Tweets | T-DICE + BiLSTM + ALBERT | 0.93 / 0.93 |
| Dong et al. (2020) | Tweets, Reddit posts | Context-aware RoBERTa-large | Twitter: 0.783 / 0.784 / 0.789; Reddit: 0.744 / 0.745 / 0.749 |
| Kumar and Anand (2020) | Tweets, Reddit posts | Context-aware RoBERTa-large | Twitter: 0.772 / 0.773 / 0.774; Reddit: 0.691 / 0.693 / 0.699 |
| Kumar et al. (2021) | Tweets | AAFAB (Adversarial and Auxiliary Features-Aware BERT) | 0.7997 / 0.8101 / 0.7896 |
| Lou et al. (2021) | Tweets, Reddit posts | ADGCN-BERT (Affective Dependency Graph Convolutional Network) | Twitter: 0.9031 / 0.8954; Reddit: 0.8077 / 0.8077 |
schmidtova-2023-semantic | Semantic Accuracy in Natural Language Generation: A Thesis Proposal | https://aclanthology.org/2023.acl-srw.48 | With the fast-growing popularity of current large pre-trained language models (LLMs), it is necessary to dedicate efforts to making them more reliable. In this thesis proposal, we aim to improve the reliability of natural language generation systems (NLG) by researching the semantic accuracy of their outputs. We look at this problem from the outside (evaluation) and from the inside (interpretability). We propose a novel method for evaluating semantic accuracy and discuss the importance of working towards a unified and objective benchmark for NLG metrics. We also review interpretability approaches which could help us pinpoint the sources of inaccuracies within the models and explore potential mitigation strategies. | # Semantic Accuracy In Natural Language Generation: A Thesis Proposal
Patrícia Schmidtová Charles University, Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics Prague, Czech Republic [email protected]
## Abstract
With the fast-growing popularity of current large pre-trained language models (LLMs), it is necessary to dedicate efforts to making them more reliable. In this thesis proposal, we aim to improve the reliability of natural language generation systems (NLG) by researching the semantic accuracy of their outputs. We look at this problem from the outside (evaluation) and from the inside (interpretability). We propose a novel method for evaluating semantic accuracy and discuss the importance of working towards a unified and objective benchmark for NLG metrics. We also review interpretability approaches which could help us pinpoint the sources of inaccuracies within the models and explore potential mitigation strategies.
## 1 Introduction
The introduction of the Transformer architecture
(Vaswani et al., 2017) irreversibly changed the research landscape in natural language processing.
Moreover, in the past year, large pre-trained language models (LLMs) have managed to permeate into the hands and minds of millions of users worldwide (Ouyang et al., 2022; Touvron et al., 2023; Scao et al., 2023). With a growing public interest in natural language generation (NLG) and dialogue systems, it is essential to thoroughly research their reliability. If a human does not know the answer to a question, the socially acceptable behavior is to say 'I do not know' instead of making up a plausible-sounding lie. This is how many users expect intelligent systems to behave, and failing to fulfill this expectation can lead to distrust, or in a worse scenario, even to the spread of misinformation.
We believe it is worth trying to propose evaluation schemes that could incentivize institutions and companies to optimize their models for reliability rather than just fluency and impressiveness. The proposed thesis aims to take a step in this direction by investigating semantic accuracy in a data-to-text generation setting. We consider a text *semantically* accurate if it faithfully represents the underlying input data.
Despite the fact that inaccurate does not always mean wrong (Maynez et al., 2020), i.e. conflicting with our current understanding of the world, we argue that an NLG system should produce semantically accurate texts to be considered reliable. We still consider it important to research NLG through the lens of semantic accuracy, without the intent of explicitly fact-checking (Thorne et al., 2018), for the following reasons:
- It is important to alert the user about the output text deviating from the data so they do not overlook it and can evaluate the factuality themselves.
- The NLG system stores a representation of its training data in its parameters. However, some of that information might be outdated and therefore is no longer accurate. If we supply an NLG system with input data containing updated information, such as the name of a new prime minister, we want this to take precedence over the information learned during training.
- In some use cases, such as in task-oriented dialogue systems, we want full control of the output to maintain a high level of reliability.
This is especially important if explicit dialogue state tracking is used so that the system has an accurate representation of what was already communicated to the user.
Thesis Objectives The main objective of this thesis is to answer the question: "How can we make data-to-text Natural Language Generation more reliable?" We hope to achieve this objective by carefully studying NLG systems, namely LLMs, with respect to semantic accuracy, from the outside
(evaluating their outputs) as well as from the inside
(inspecting their hidden layers).
It is valuable to quantify how reliable an NLG
system is before attempting to increase its reliability to measure the magnitude of such an increase.
Furthermore, we hope to provide insights into the operation of NLG systems and the limitations they have. This will allow for a more informed design of NLG systems to tackle the detected problems.
Thesis Structure The first part of the thesis, described in Section 2, is dedicated to NLG evaluation. We propose a novel approach for evaluating the semantic accuracy of a generated text given the source data. We also intend to contribute a benchmarking dataset for evaluating NLG metrics focused on semantic accuracy. Thomson and Reiter (2021) have presented such a dataset with high-quality human annotations; however, due to the high costs of human annotation, it is very modest in size. Therefore, we share our idea of constructing a larger dataset automatically.
In the second part of the thesis, described in Section 3, we will use interpretability techniques to explore where inaccuracies appear. We aim to then use these insights to learn how to guide the NLG
system to produce outputs that are more faithful to the input data.
Applications This thesis' most visible contribution will be in the task of data-to-text natural language generation as it is our primary goal. We anticipate our insights will also be helpful in dialogue systems and retrieval-augmented generation
(Lewis et al., 2020). Furthermore, it is our intention to extend the described approaches to abstractive summarization as the task is similar to ours. Finally, we believe that the evaluation method presented in Section 2 could even be used for evaluating humanwritten texts. While it is not intended as a factchecking method by itself, it could be used as an aid for users who perform fact-checking to warn them about text parts not consistent with the data.
## 2 Evaluating Semantic Accuracy
Many aspects of NLG system outputs can be evaluated: fluency, grammatical correctness, acceptability with respect to a context, or similarity to a given reference text, etc (Howcroft et al., 2020). In this thesis, we focus solely on the aspect of semantic accuracy which is far from being solved.
We aspire to evaluate how accurately a target text represents given source data, whether it is a set of semantic triples (subject-predicate-object), a table, or a different structured form. Our proposed output is not only the numeric result of the metric, which can be used in a development or research setting, but primarily a set of alignments between the text and the data (Dou and Neubig, 2021). This will allow for an intuitive visualization for a user in a fact-checking setting.
We consider three major types of semantic inaccuracy, following Maynez et al. (2020). The first is **extrinsic hallucination** - a phenomenon where the text includes additional information that is not directly inferable from the input data, such as introducing new entities. The second and more subtle way of introducing semantic inaccuracy is **intrinsic hallucination** - creating new relations between entities that are not described in the input data.
Finally, we consider **omission** - omitting some information from the source data in the target text.
## 2.1 Sota In Semantic Accuracy Evaluation
We review state-of-the-art semantic accuracy metrics and discuss the limitations we aim to address in our work. We refer to Celikyilmaz et al. (2020)
and Sai et al. (2022) for a broader overview.
Metrics such as BERTScore (Zhang et al., 2020),
Bleurt (Sellam et al., 2020), or PARENT (Dhingra et al., 2019) can be used to evaluate the semantic accuracy of a given text. The major difference between these metrics and the method we propose later on in this section is that instead of comparing the target text with the source data, they compare it with a reference text. This means the methods can only be applied to examples where a reference is available. Furthermore, such metrics cannot explain why a text received a high or a low score –
they can only measure the proximity to a reference.
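For illustration, a reference-based check with the `bert_score` package could look like the sketch below; the candidate and reference strings are invented examples, and the resulting score says nothing about which part of the text is wrong, which is exactly the limitation discussed above.

```python
# Reference-based scoring with BERTScore (Zhang et al., 2020).
# The candidate and reference below are invented toy examples.
from bert_score import score

candidates = ["Alan Bean was a crew member of Apollo 12, which was operated by NASA."]
references = ["Alan Bean served as a crew member on NASA's Apollo 12 mission."]

# P, R, F1 are tensors with one value per candidate-reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.item():.4f}")
```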
The majority of metrics for evaluating the semantic accuracy of generated text utilize models pre-trained for the task of Natural Language Inference (NLI). Such metrics include NUBIA (Kane et al., 2020), MENLI (Chen and Eger, 2023), and approaches presented by Maynez et al. (2020) and Dušek and Kasner (2020).
The advantage of NLI-based metrics is that they generally do not need a reference (with the exception of NUBIA) and can handle lexical diversity. However, they are not easily interpretable by the user, because they natively do not show where the inaccuracies occur within the text. A work by Goyal and Durrett (2020) mitigates this by applying entailment to dependency trees. This method is not equipped to deal with negation and omission which we aim to address in our work.
Finally, we review a text-level error detection metric for table-to-text generation presented by Kasner et al. (2021). This metric uses rules to construct a set of sentences that can be derived from the input data and measure the semantic similarity between them and the evaluated sentence. We aspire to reach a better result by crafting a synthetic pre-training set containing more intricate hallucinations as described later on in this section.
## 2.2 Metric Evaluation
To our knowledge, there is not yet an objective way of evaluating how well semantic accuracy metrics perform in finding inaccurate information. We might not fully achieve objective evaluation of metrics but we argue it is important to move towards this goal as it will lead to better evaluation methods. The most prevalent method of measuring metric performance is comparing the scores given to selected evaluated examples to human judgment.
However, such evaluation is not easily reproducible and does not give us enough information to compare the metrics among themselves (Belz et al.,
2021).
Data-to-text datasets such as WebNLG (Gardent et al., 2017), Enriched WebNLG (Castro Ferreira et al., 2018), DART (Nan et al., 2021) are not sufficient for benchmarking evaluation metrics. As datasets intended as NLG system data, they generally do not contain phenomena like hallucination, but in the rare cases when they do, they are not marked as such. The closest to our goals is the dataset presented by Thomson and Reiter (2021)
intended for error detection in table-to-text generation. It contains high-quality human annotation, with the drawback of being small in size - 90 examples across train and validation sets combined. Maynez et al. (2020) created such a dataset for the task of abstractive summarization by extending the XSum dataset (Narayan et al., 2018). They conducted a human annotation experiment to tag hallucinations in the generated summaries. While we hope we can extend our evaluation method to abstractive summarization, this dataset is not directly suitable for evaluating data-to-text generation. A similar benchmarking dataset is available for dialogue systems
(Dziri et al., 2022). This dataset contains annotations with manually evaluated judgments about whether a system response is fully attributable to a relevant large unstructured source of information.
Such task is out of scope for this thesis.
To create a unified way of evaluating and comparing NLG metric performance, we propose a construction of a dataset designed for data-to-text metric evaluation which will contain examples of semantically accurate texts, both extrinsic and intrinsic hallucination, and omission. This will allow for a fine-grained diagnostic of the metric performance in a fully automated setting.
A portion of the data-to-text datasets mentioned above will serve as positive examples containing no hallucinations or omissions. Hallucinations could be automatically generated by dropping semantic triples. We selected this format as our starting point for several reasons:
- It is widely used in the datasets we considered.
- Other formats (tables, graphs, name-value slot pairs) can be losslessly transferred to semantic triples.1
In case we drop a triple where both the subject and object are included in other triples, we are creating an intrinsic hallucination, since the only thing being removed is the relation between the two. Otherwise, we are creating an extrinsic hallucination.
Generating examples of omission could be done by dropping a sentence from the reference text whenever it contains more than one sentence. More intricate examples could be generated by dropping a subtree from the dependency tree of the reference.
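A minimal sketch of this construction is shown below, assuming the input data come as (subject, predicate, object) tuples; the toy triples and reference text are invented, and real examples would come from WebNLG-style corpora.

```python
# Automatic construction of inaccuracy examples by dropping triples (hallucination)
# or sentences (omission). Toy data; sentence splitting is deliberately naive.
import random

def make_hallucination_example(triples, reference):
    """Drop one triple from the data while keeping the full text. The example is
    intrinsic if both the dropped subject and object still occur in the remaining
    triples (only the relation disappears), extrinsic otherwise."""
    dropped = random.choice(triples)
    remaining = [t for t in triples if t != dropped]
    entities = {e for s, _, o in remaining for e in (s, o)}
    kind = "intrinsic" if dropped[0] in entities and dropped[2] in entities else "extrinsic"
    return {"data": remaining, "text": reference, "label": kind}

def make_omission_example(triples, reference):
    """Drop one sentence from a multi-sentence reference while keeping all triples."""
    sentences = [s.strip() for s in reference.split(".") if s.strip()]
    if len(sentences) < 2:
        return None
    drop_idx = random.randrange(len(sentences))
    kept = [s for i, s in enumerate(sentences) if i != drop_idx]
    return {"data": triples, "text": ". ".join(kept) + ".", "label": "omission"}

triples = [("Alan_Bean", "mission", "Apollo_12"),
           ("Apollo_12", "operator", "NASA"),
           ("Alan_Bean", "birthPlace", "Wheeler")]
reference = "Alan Bean, born in Wheeler, flew on Apollo 12. Apollo 12 was operated by NASA."

print(make_hallucination_example(triples, reference))
print(make_omission_example(triples, reference))
```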
A portion of the dataset should also include categorized outputs produced by various NLG systems.
This will ensure that the metric itself is properly evaluated on the data it was designed for. There is no scarcity of erroneous NLG outputs, however, the bottleneck will be the need for human annotation and categorization. For this reason, we intend to start with a small set of such data and slowly expand it.
Creating such a benchmarking dataset would help us compare the performance of existing metrics on the three categories of inaccuracies and to understand their limits.
1We consider graphs as tuples G = (*V, E*) where V is a set of vertices and E is a set of edges. We propose that the edges can be converted to predicates and vertices can be converted to subjects and objects in the semantic triples.
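To make the conversion described in the footnote concrete, the sketch below maps two of the formats mentioned above onto triples; the slot names and entities are invented.

```python
# Lossless conversion of other input formats into subject-predicate-object triples.

def slots_to_triples(entity, slot_values):
    """Name-value slot pairs (e.g., a dialogue meaning representation) -> triples."""
    return [(entity, slot, value) for slot, value in slot_values.items()]

def graph_to_triples(labelled_edges):
    """Edges of G = (V, E) with labels -> triples (vertex, edge label, vertex)."""
    return [(src, label, dst) for src, label, dst in labelled_edges]

print(slots_to_triples("The_Eagle", {"eatType": "restaurant", "area": "riverside"}))
print(graph_to_triples([("Alan_Bean", "mission", "Apollo_12")]))
```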
## 2.3 Evaluation Method
We propose a novel method to evaluate semantic accuracy based on alignments between source data and target text. Using the alignment method introduced by Dou and Neubig (2021), we intend to align portions of the data, e.g. semantic triples, to phrases in the target text. To reach phrase-level granularity, we aim to use dependency trees - inspired by the work of Vamvas and Sennrich (2022)
and Goyal and Durrett (2020).
If a portion of the data cannot be aligned with any combination of the phrases, it means the information was omitted. On the other hand, if a phrase cannot be aligned with any portion of the data, it is likely indicative of a hallucination. We are aware this could also happen with filler words or phrases. We can handle such cases during dependency parsing or filter them through their perplexity - filler phrases generally have a lower perplexity than information-bearing phrases.
The main output of this method is the set of alignments that can be used to flag any suspicious parts.
However, in a development setting, it is desirable to have a numerical output quantifying the quality of an evaluated system. This can be obtained either as a total distance between the aligned embeddings in the embedding space or the percentage of embeddings not aligned. Both scores can be normalized for sequence length.
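As a simplified illustration, the sketch below uses cosine similarity between sentence embeddings as a stand-in for the finer-grained alignment and dependency-based phrase extraction described above; the model name, threshold, and inputs are our own assumptions.

```python
# Coarse alignment between verbalized triples and text phrases via embeddings.
# Unaligned triples suggest omissions; unaligned phrases suggest hallucinations.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

triples = ["Alan Bean | mission | Apollo 12", "Apollo 12 | operator | NASA"]
phrases = ["Alan Bean flew on Apollo 12",
           "which was operated by NASA",
           "and later became a painter"]   # unsupported by the data

sim = util.cos_sim(model.encode(triples), model.encode(phrases))  # |data| x |phrases|

THRESHOLD = 0.5   # illustrative value; would be tuned on the benchmark
omitted = [t for i, t in enumerate(triples) if sim[i].max() < THRESHOLD]
hallucinated = [p for j, p in enumerate(phrases) if sim[:, j].max() < THRESHOLD]

# Length-normalized numeric scores for a development setting.
omission_rate = len(omitted) / len(triples)
hallucination_rate = len(hallucinated) / len(phrases)
print(omitted, hallucinated, omission_rate, hallucination_rate)
```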
The advantage of this method is that it allows us to track the source of all information in the target text, not only the inaccurate parts. This can be useful in a setting where the alignments are presented directly to the user because if visualized properly, it could make fact-checking faster and easier.
Expected Qualities We aspire for the evaluation method to have the following qualities:
- **Explainable** Instead of just outputting a numerical value to characterize the accuracy of a target text given the source data, it also identifies the hallucination spans. Therefore, it should be able to point out precisely which parts of the text are not supported by the data or which parts of the data were omitted from the text.
- **Reference-less** The metric is designed to evaluate novel texts where no reference text is available. This corresponds to the task of quality estimation (Dušek et al., 2019; Specia et al., 2013). While this might seem like
a limitation, recent work by Kocmi and Federmann (2023) shows that neural metrics are capable of reaching better results when not presented with a reference.
- **Robust** The metric is robust with respect to lexical diversity. The choice of words should not matter as long as they are semantically similar. We expect to approach this quality by working with embeddings rather than ngrams.
- **Automatic** While the metric can be used to help a user, it should not require any input from the user.
Alternative Approach as Tagging Finding hallucinations and omissions in the text can also be approached as a BIO tagging problem (Ramshaw and Marcus, 1995). In our case, we aim to classify every token as the beginning of a hallucination or omission. This approach has been previously explored on a more narrow task of error detection
(Kasner et al., 2021) trained on data from Thomson and Reiter (2021).
We believe that training a BIO tagger could benefit from our proposed benchmarking dataset from Section 2. The hallucination and omission spans can then be automatically annotated using the alignments from our main evaluation method. Even if the alignments prove to be of worse quality than anticipated, we will investigate whether adding this data as a pre-training step and then refining on the high-quality data from Thomson and Reiter (2021) leads to better performance.
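A small sketch of this casting is given below: character-level spans (for instance derived from the alignments above) are projected onto token-level B/I/O labels that a standard token classifier can be trained on. The tokenizer choice, example sentence, and span are illustrative assumptions.

```python
# Mapping character-level hallucination spans to token-level BIO labels.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Alan Bean flew on Apollo 12 and later became a painter."
hallucination_spans = [(28, 54)]   # "and later became a painter" (invented span)

enc = tokenizer(text, return_offsets_mapping=True, add_special_tokens=False)

labels = []
for start, end in enc["offset_mapping"]:
    inside = any(s <= start < e for s, e in hallucination_spans)
    if not inside:
        labels.append("O")
    else:
        begins = any(start == s for s, e in hallucination_spans)
        labels.append("B-HAL" if begins else "I-HAL")

print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"]), labels)))
```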
## 3 Mitigating Inaccuracies With Interpretability
In the second part of the thesis, we will use various techniques to uncover the sources of semantic inaccuracies within networks. We will then use the gained knowledge to improve the semantic accuracy of the generated text.
In the first subsection, we discuss the methods we intend to explore. In the second subsection, we name the research questions we seek to answer.
## 3.1 Methods
We will investigate LLMs with openly accessible weights (Touvron et al., 2023; Taori et al., 2023; Chung et al., 2022; Wang et al., 2022). In our experiments, we will aim to always have a mixture of encoder-decoder models vs decoder-only models, to explore whether the model architecture makes a difference. We will also compare models fine-tuned on instructions to those that were not to investigate whether this training schema is beneficial in increasing semantic accuracy.
Attention Visualization The first step in our search for semantic inaccuracies is using Attention Visualization (Vig, 2019). The goal is to look for an intuitive insight into what happens inside the networks while inaccuracies are generated. We will search for any recurring patterns that can be addressed by pruning. We bear in mind that the results might be hard to interpret or even misleading (Mareček et al., 2020; Wiegreffe and Pinter, 2019). Nevertheless, we consider this method a good place to start in our interpretability research.
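As a starting point, attention weights can be pulled out of any Transformers model and rendered with BertViz; the sketch below uses a small encoder and an invented sentence purely for illustration.

```python
# Extracting attention weights and rendering them with BertViz (Vig, 2019).
# Intended to be run in a notebook; the model and sentence are illustrative.
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Alan Bean flew on Apollo 12.", return_tensors="pt")
outputs = model(**inputs)   # outputs.attentions: one (batch, heads, len, len) tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)   # interactive head-by-head visualization
```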
Probing We anticipate that the major part of our analysis will be done using probing (Ettinger et al.,
2016; Adi et al., 2017; Conneau et al., 2018). Probing aims to extract information from the network's hidden layers by applying a classifier of an investigated linguistic phenomenon on top of them.
In this thesis, we will mostly be interested in extracting graph structures as we are equally interested in entities (nodes) and relations among them
(edges). This will be inspired by extracting syntactic properties (Hewitt and Manning, 2019), and discourse structures (Huber and Carenini, 2022)
from hidden layers. The core idea of both works is applying linear transformations to the activations, considering the result as a distance metric which was then applied to construct trees directly or using dynamic programming.
Our idea of utilizing this approach is to extract the structures in a similar manner and to try to match them to the input data. This can be done on multiple levels to look for the precise point when a hallucination forms by the introduction of new information into the structure or when a part of the input data is forgotten.
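A minimal version of such a probe, in the spirit of Hewitt and Manning (2019), is sketched below: a single learned projection turns pairwise distances between hidden states into predicted distances in the target structure (a syntax tree in their case, a graph over entities and relations in ours). The dimensions, loss, and random placeholder distances are illustrative assumptions.

```python
# A structural probe: squared L2 distance between linearly projected hidden
# states is trained to match distances in a gold tree or graph.
import torch
import torch.nn as nn

class StructuralProbe(nn.Module):
    def __init__(self, hidden_dim=768, probe_rank=128):
        super().__init__()
        self.proj = nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)

    def forward(self, hidden_states):              # (seq_len, hidden_dim)
        transformed = hidden_states @ self.proj     # (seq_len, probe_rank)
        diffs = transformed.unsqueeze(1) - transformed.unsqueeze(0)
        return (diffs ** 2).sum(-1)                 # (seq_len, seq_len) predicted distances

probe = StructuralProbe()
hidden = torch.randn(10, 768)                       # activations from one layer
gold = torch.randint(0, 5, (10, 10)).float()        # placeholder gold distances
loss = torch.abs(probe(hidden) - gold).mean()       # L1 loss, as in the original probe
loss.backward()
```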
We also plan to build upon the work of Schuster and Linzen (2022), who show that Transformerbased models do not yet have entity tracking capabilities and can introduce new entities, which is an instance of extrinsic hallucination (Schmidtova, 2022). Klafka and Ettinger (2020) use probing to obtain information about the surrounding words from a given word. This approach could help us reveal intrinsic hallucination in case we retrieve information about a predicate not supported by the data. We will also look into probing via prompting an LLM (Li et al., 2022) as this approach does not require a trained probe.
Pruning After identifying a potential source of inaccuracy, one of the most natural mitigation strategies is attention head pruning - removing some of the attention heads after training. Voita et al. (2019) and Behnke and Heafield (2020) observed a comparable model performance in machine translation before and after strategically pruning attention heads.
Using attention visualization and probing, we aim to identify attention heads that consistently contribute to hallucination by copying from the training data instead of attending to the input data. If we succeed, there is a possibility of improving a model's semantic accuracy by pruning those heads.
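Once candidate heads are identified, removing them is straightforward with the pruning utility in Hugging Face Transformers; the model and the specific heads below are placeholders.

```python
# Pruning specific attention heads from a Transformer encoder.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# {layer index: [head indices to remove in that layer]} -- placeholder choices.
heads_to_prune = {2: [0, 7], 5: [3]}
model.prune_heads(heads_to_prune)

print(model.config.pruned_heads)   # bookkeeping of which heads were removed
```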
Fine-tuning Fine-tuning a large pre-trained language model can be computationally very demanding. Most LLMs which achieve state-of-the-art results are simply too large to fine-tune using traditional methods on hardware accessible to a Ph.D. student. Therefore, we aim to explore methods such as LoRA (Hu et al., 2021) and QLoRA
(Dettmers et al., 2023) to fine-tune LLMs using the available data-to-text generation datasets to reach higher semantic accuracy.
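A minimal sketch of attaching LoRA adapters with the PEFT library is shown below; the base model and hyperparameters are placeholders standing in for the larger LLMs and data-to-text corpora we intend to use.

```python
# Wrapping a causal LM with LoRA adapters (Hu et al., 2021) via PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in for a larger LLM

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()   # only the adapter weights remain trainable
```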
Furthermore, in case we find recurring hallucination patterns through attention visualization and probing, we can use the matrix injection method described by Hu et al. (2021) to remove hallucinations before they can even appear in the generated text.
Modelling Uncertainty In case a model is not confident enough in its answer, it should rather say
'I don't know' instead of hallucinating a plausible-sounding response. Goldberg (2023) argues that such behavior cannot be learned in a supervised manner, as we ourselves do not know what knowledge is stored in the model.
We aim to explore Bayesian methods to estimate the model uncertainty. Wu et al. (2022) model aleatory (data) and epistemic (model) uncertainty (Kiureghian and Ditlevsen, 2009) to detect out-of-domain queries fed to dialogue systems. Our intentions are the opposite - instead of using this method on the system inputs, we aim to focus on the outputs. We intend to leverage this method to model epistemic uncertainty and use the modeled values to update the system weights.
We believe this will be a promising research area as this is the kind of interaction humans intuitively expect.
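As one crude, non-Bayesian proxy for such a signal, the sketch below reads the mean token-level predictive entropy off a generated continuation; it does not decompose epistemic from aleatory uncertainty and is only meant to show where the raw quantities can be obtained. The model and prompt are invented.

```python
# Mean token-level predictive entropy of a generated continuation as a rough
# uncertainty proxy (not a full epistemic/aleatory decomposition).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The current prime minister of Atlantis is"   # invented query
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10, do_sample=False,
                         return_dict_in_generate=True, output_scores=True)

# out.scores holds one (batch, vocab) logit tensor per generated token.
entropies = []
for logits in out.scores:
    probs = torch.softmax(logits, dim=-1)
    entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum(-1))

print(f"Mean token entropy: {torch.stack(entropies).mean().item():.3f}")
```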
Prompt Engineering The performance of LLMs largely depends on the prompts they receive. We will investigate to what extent prompt choice can influence the semantic accuracy of the produced texts.
There are already many strategies and courses for prompt engineering (Bach et al., 2022; Sanh et al.,
2022; Liu et al., 2021; Ng and Fulford, 2023), however, the suggested strategies for hallucination mitigation are often not very effective. We will seek the boundaries of semantic accuracy that can be achieved through prompt engineering.
We aim to experiment with zero-shot prompting
(Chang et al., 2008; Palatucci et al., 2009), few-shot prompting (Brown et al., 2020), and chain-of-thought prompting (Wei et al., 2023). We are aware that a prompt that mitigates hallucinations for one model might not be as successful for another, and we are willing to modify the prompts for specific models. We plan to experiment with many aspects of the prompt, such as sentence length, unambiguity, word choice, placeholders, and special symbols as delimiters.
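The kind of prompt variation we have in mind is illustrated below; the wording, delimiters, and toy triples are our own assumptions, not prescriptions.

```python
# Zero-shot, few-shot and chain-of-thought style templates for triple-to-text
# generation. Everything here is an illustrative placeholder.
TRIPLES = "Alan_Bean | mission | Apollo_12 ### Apollo_12 | operator | NASA"

ZERO_SHOT = (
    "Verbalize the following triples into fluent English. "
    "Use only the information in the triples and nothing else.\n"
    f"Triples: {TRIPLES}\nText:"
)

FEW_SHOT = (
    "Triples: Berlin | capitalOf | Germany\n"
    "Text: Berlin is the capital of Germany.\n\n"
    f"Triples: {TRIPLES}\nText:"
)

CHAIN_OF_THOUGHT = (
    f"Triples: {TRIPLES}\n"
    "First list every entity and relation above, then write one or two sentences "
    "that mention all of them and nothing that is not in the triples.\nReasoning:"
)
```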
The advantage of prompt engineering is that the results will be applicable immediately. We expect to observe a wide range in LLM performance based on prompt choice.
## 3.2 Research Questions
Through our interpretability research, we aim to answer the following questions:
- Are there recurring patterns in attention that appear when the model is hallucinating?
- Can we use probing to identify the layers where hallucinated information infiltrates the input data?
- Is it possible to teach the network to estimate its confidence in a fact before replying? Would such confidence be reliable or arbitrary?
- Is it possible to minimize the influence of the prompt on semantic accuracy by manipulating the model by fine-tuning, pruning attention heads, or using reinforcement learning to estimate model confidence?
- How significantly can we increase semantic accuracy through modifying the model's inner properties (weight updates, skip connections, or attention head pruning) compared to the increase we can achieve through less resourceintensive prompt engineering?
## 4 Conclusion
This thesis proposal has outlined the importance of investigating semantic accuracy in natural language generation. By focusing on this important aspect, we aim to address the challenge of ensuring that NLG systems generate text that represents the underlying data more faithfully.
We proposed a unified benchmark for NLG metrics focusing on semantic accuracy, which will enable researchers to compare them in an objective and standardized manner. Additionally, we introduced a novel semantic accuracy evaluation method, which measures how accurately the generated text represents the underlying data while also providing data-text alignments.
Furthermore, we discussed ways to investigate where inaccuracies appear inside NLG models, with the aim of identifying potential areas for improvement. Our proposed approach includes attention visualization and probing, which provide insights into the decision-making process of the models and enhance their interpretability. The mitigation strategies we aim to use with this knowledge are attention head pruning, fine-tuning, and updating the weights using estimated uncertainty. We also aim to explore how prompt engineering can contribute to more semantically accurate texts.
We hope our research will lead to improved communication between humans and machines, enhanced user experiences, and more trust from the public.
Challenges There is a possibility that certain LLMs may have already encountered the development and testing portions of the datasets that we plan to use for evaluation during their training process. We will be very mindful of this while conducting all evaluations and aim to use training data extraction techniques (Carlini et al., 2021) to verify whether this is the case for a particular set of data and a given LLM. However, searching for new unseen data will be challenging and is definitely something that should be addressed by a wider scientific community.
## Acknowledgements
This research was supported by SVV 260575 and by the European Research Council (Grant agreement No. 101039303 NG-NLG). I would like to thank Ondřej Dušek, Mateusz Lango, Tom Kocmi, and the anonymous reviewers for their helpful feedback.
## References
Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks.
In *International Conference on Learning Representations*.
Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Alshaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 93–104, Dublin, Ireland. Association for Computational Linguistics.
Maximiliana Behnke and Kenneth Heafield. 2020. Losing heads in the lottery: Pruning transformer attention in neural machine translation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2664–2674, Online. Association for Computational Linguistics.
Anya Belz, Anastasia Shimorina, Shubham Agarwal, and Ehud Reiter. 2021. The ReproGen shared task on reproducibility of human evaluations in NLG:
Overview and results. In Proceedings of the 14th International Conference on Natural Language Generation, pages 249–258, Aberdeen, Scotland, UK.
Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650. USENIX Association.
Thiago Castro Ferreira, Diego Moussallem, Sander Wubben, and Emiel Krahmer. 2018. Enriching the webnlg corpus. In *Proceedings of the 11th International Conference on Natural Language Generation*,
INLG'18, Tilburg, The Netherlands. Association for Computational Linguistics.
Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao.
2020. Evaluation of text generation: A survey.
CoRR, abs/2006.14799.
Ming-Wei Chang, Lev Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In *Proceedings of the* 23rd National Conference on Artificial Intelligence -
Volume 2, AAAI'08, page 830–835. AAAI Press.
Yanran Chen and Steffen Eger. 2023. Menli: Robust evaluation metrics from natural language inference.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
2022. Scaling instruction-finetuned language models.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms.
Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, MingWei Chang, Dipanjan Das, and William Cohen. 2019.
Handling divergent reference texts when evaluating table-to-text generation. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European
Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online.
Association for Computational Linguistics.
Ondřej Dušek and Zdeněk Kasner. 2020. Evaluating semantic accuracy of data-to-text generation with natural language inference. In *Proceedings of the 13th International Conference on Natural Language Generation*, pages 131–137, Dublin, Ireland. Association for Computational Linguistics.
Ondřej Dušek, Karin Sevegnani, Ioannis Konstas, and Verena Rieser. 2019. Automatic quality estimation for natural language generation: Ranting (jointly rating and ranking). In *Proceedings of the 12th International Conference on Natural Language Generation*,
pages 369–376, Tokyo, Japan. Association for Computational Linguistics.
Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2022. Evaluating attribution in dialogue systems: The BEGIN benchmark. Transactions of the Association for Computational Linguistics, 10:1066–
1083.
Allyson Ettinger, Ahmed Elgohary, and Philip Resnik.
2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139, Berlin, Germany. Association for Computational Linguistics.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In *Proceedings* of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 179–188. Association for Computational Linguistics.
Yoav Goldberg. 2023. Reinforcement learning for language models. Accessed on May 3rd, 2023.
Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment.
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online.
Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020.
Twenty years of confusion in human evaluation: NLG
needs evaluation sheets and standardised definitions.
In Proceedings of the 13th International Conference on Natural Language Generation, pages 169–182, Dublin, Ireland. Association for Computational Linguistics.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models.
Patrick Huber and Giuseppe Carenini. 2022. Towards understanding large-scale discourse structures in pretrained and fine-tuned language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2376–2394, Seattle, United States. Association for Computational Linguistics.
Hassan Kane, Muhammed Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh, and Mohamed Coulibali. 2020. NUBIA: NeUral based interchangeability assessor for text generation. In *Proceedings of the 1st Workshop* on Evaluating NLG Evaluation, pages 28–37, Online (Dublin, Ireland). Association for Computational Linguistics.
Zdeněk Kasner, Simon Mille, and Ondřej Dušek. 2021. Text-in-context: Token-level error detection for table-to-text generation. In *Proceedings of the 14th International Conference on Natural Language Generation*, pages 259–265, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Armen Der Kiureghian and Ove Ditlevsen. 2009.
Aleatory or epistemic? does it matter? Structural Safety, 31(2):105–112. Risk Acceptance and Risk Communication.
Josef Klafka and Allyson Ettinger. 2020. Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801–4811, Online. Association for Computational Linguistics.
Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.
Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459–
9474. Curran Associates, Inc.
Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2022.
Probing via prompting. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1144–1157, Seattle,
United States. Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
David Mareček, Jindřich Libovický, Tomáš Musil, Rudolf Rosa, and Tomasz Limisiewicz. 2020. *Hidden in the Layers: Interpretation of Neural Networks for Natural Language Processing*, volume 20 of *Studies in Computational and Theoretical Linguistics*. Institute of Formal and Applied Linguistics, Prague, Czechia.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Opendomain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432–447, Online. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. *ArXiv*, abs/1808.08745.
Andrew Ng and Isa Fulford. 2023. Guidelines for prompting.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with human feedback.
Mark Palatucci, Dean Pomerleau, Geoffrey Hinton, and Tom M. Mitchell. 2009. Zero-shot learning with semantic output codes. In *Proceedings of the 22nd* International Conference on Neural Information Processing Systems, NIPS'09, page 1410–1418, Red Hook, NY, USA. Curran Associates Inc.
Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora.
Ananya B. Sai, Akash Kumar Mohankumar, and Mitesh M. Khapra. 2022. A survey of evaluation metrics used for nlg systems. *ACM Comput. Surv.*,
55(2).
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization.
Teven Le Scao and Angela Fan et al. 2023. Bloom: A
176b-parameter open-access multilingual language model.
Patricia Schmidtova. 2022. Theatre play generation.
Master's thesis, Charles University.
Sebastian Schuster and Tal Linzen. 2022. When a sentence does not introduce a discourse entity, transformer-based models still sometimes refer to it.
In *Proceedings of the 2022 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 969–982, Seattle, United States. Association for Computational Linguistics.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Lucia Specia, Kashif Shah, Jose G.C. de Souza, and Trevor Cohn. 2013. QuEst - a translation quality estimation framework. In *Proceedings of the 51st* Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79–84, Sofia, Bulgaria. Association for Computational Linguistics.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/
stanford_alpaca.
Craig Thomson and Ehud Reiter. 2021. Generation challenges: Results of the accuracy evaluation shared task.
In Proceedings of the 14th International Conference on Natural Language Generation, pages 240–248, Aberdeen, Scotland, UK. Association for Computational Linguistics.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: a large-scale dataset for fact extraction and VERification. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana.
Association for Computational Linguistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models.
Jannis Vamvas and Rico Sennrich. 2022. As little as possible, as much as necessary: Detecting over- and undertranslations with contrastive conditioning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 490–500, Dublin, Ireland. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000–6010, Red Hook, NY,
USA. Curran Associates Inc.
Jesse Vig. 2019. Visualizing attention in transformerbased language representation models.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy.
Association for Computational Linguistics.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, A. Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, M. Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddharth Deepak Mishra, Sujan C. Reddy, Sumanta Patro, Tanay Dixit, Xu dong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A.
Smith, and Daniel Khashabi. 2022. Benchmarking generalization via in-context instructions on 1,600+
language tasks.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 11–20, Hong Kong, China. Association for Computational Linguistics.
Yanan Wu, Zhiyuan Zeng, Keqing He, Yutao Mou, Pei Wang, and Weiran Xu. 2022. Distribution calibration for out-of-domain detection with Bayesian approximation. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 608–615, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. |
raiyan-etal-2023-math | Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | https://aclanthology.org/2023.acl-srw.49 | The art of mathematical reasoning stands as a fundamental pillar of intellectual progress and is a central catalyst in cultivating human ingenuity. Researchers have recently published a plethora of works centered around the task of solving Math Word Problems (MWP) {---} a crucial stride towards general AI. These existing models are susceptible to dependency on shallow heuristics and spurious correlations to derive the solution expressions. In order to ameliorate this issue, in this paper, we propose a framework for MWP solvers based on the generation of linguistic variants of the problem text. The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes. We use DeBERTa (Decoding-enhanced BERT with disentangled attention) as the encoder to leverage its rich textual representations and enhanced mask decoder to construct the solution expressions. Furthermore, we introduce a challenging dataset, ParaMAWPS, consisting of paraphrased, adversarial, and inverse variants of selectively sampled MWPs from the benchmark Mawps dataset. We extensively experiment on this dataset along with other benchmark datasets using some baseline MWP solver models. We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model. We make our code and data publicly available. | # Math Word Problem Solving By Generating Linguistic Variants Of Problem Statements
Syed Rifat Raiyan, Md. Nafis Faiyaz, Shah Md. Jawad Kabir, Mohsinul Kabir, Hasan Mahmud, Md. Kamrul Hasan
Systems and Software Lab (SSL)
Department of Computer Science and Engineering Islamic University of Technology, Dhaka, Bangladesh
{rifatraiyan, nafisfaiyaz, jawadkabir, hasan, hasank}@iut-dhaka.edu
## Abstract
The art of mathematical reasoning stands as a fundamental pillar of intellectual progress and is a central catalyst in cultivating human ingenuity. Researchers have recently published a plethora of works centered around the task of solving Math Word Problems (MWP) - a crucial stride towards general AI. These existing models are susceptible to dependency on shallow heuristics and spurious correlations to derive the solution expressions. In order to ameliorate this issue, in this paper, we propose a framework for MWP solvers based on the generation of linguistic variants of the problem text. The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes. We use DeBERTa (Decoding-enhanced BERT with disentangled attention) as the encoder to leverage its rich textual representations and enhanced mask decoder to construct the solution expressions. Furthermore, we introduce a challenging dataset, PARAMAWPS,
consisting of paraphrased, adversarial, and inverse variants of selectively sampled MWPs from the benchmark MAWPS dataset. We extensively experiment on this dataset along with other benchmark datasets using some baseline MWP solver models. We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model. We make our code and data publicly available.
## 1 Introduction
Math word problem solving is a long-standing research problem in Artificial General Intelligence
(AGI) and a lot of studies about this topic, from both industry and academia, have been published recently. A typical Math Word Problem (MWP)
takes the form of a written narrative that articulates a problem scenario and poses a question regarding one or more unknown quantities. A language model capable of solving such problems has to translate the human-readable problem statement to a valid mathematical expression that can be evaluated to obtain the numeric answer.

**Problem:** 69 handbags are sold for $13 each. There are a total of 420 handbags in a boutique and the remaining handbags are sold for $7 each. How much did the boutique earn after selling all the handbags?
**Expression:** x = 69 × 13 + (420 − 69) × 7
**Solution:** 3354

Table 1: An example of a Math Word Problem.

An example of a classic MWP is portrayed in Table 1, where the reader is asked to infer the revenue of a boutique shop. Such problems are generally found in math textbooks of 1st to 8th grade students and are easily solvable by humans with decent mathematical aptitude.
A lot of challenges manifest while designing an automated system for solving these problems
(Zhang et al., 2019; Sundaram et al., 2022). The primary challenge is to understand the quantities in the problem and capture their complex mathematical interconnections from a linear textual sequence written in natural language. There exists a diverse range of MWPs with differing difficulty levels, *i.e.*, varying numbers of unknown values, and depth of the relationships between quantities, which require good mathematical reasoning ability to solve. Furthermore, the absence of crucial information and the presence of irrelevant information in the problem statements proves to be quite a challenge for the solver models (Patel et al., 2021). Other challenges include learning to tackle the chronological and temporal ambiguities of the events happening in the problem statements and dealing with MWPs that significantly differ from the training set in terms of semantic and syntactic structure.
To address the problem outlined in Table 1, a competent MWP solver model would need to possess the ability to associate the quantity, *i.e.*, 69 handbags, with its price attribute of $13, and understand the relative arithmetic order by deriving 351 remaining handbags, *i.e.*, 420 − 69, before associating the price attribute of $7. A lot of psychological studies have been done on how human beings learn to solve mathematical problems and improve their aptitude (Piaget, 2013; Peterson et al.,
2003; Kingsdorf and Krawec, 2016). The frontier of research involving MWP solving is considered a momentous step towards the apogee of AGI
(Bubeck et al., 2023) and so researchers have dedicated their efforts to replicating these complex cognitive patterns exhibited by human beings within the frameworks of AI models. The existing methods that are considered strong baselines for MWP
solving can be demonstrably shown to use shallow heuristics to solve many of the MWPs in the benchmark datasets (Patel et al., 2021) creating a faux impression of their mathematical reasoning capability. To account for this limitation, in this paper —
- We propose a framework for solving simple math word problems by generating paraphrased linguistic variants of the input problem statement using OpenAI's latest Generative Pre-trained Transformer (GPT-3) (Brown et al., 2020) models, namely *text-davinci003* and *gpt-3.5-turbo*. The problem statement variants along with the original problem text then undergo the appropriate preprocessing steps and are fed to an MWP
solver model with a DeBERTa-based encoder and Enhanced Mask decoder.
- We also generate a large, augmented version of the MAWPS (Koncel-Kedziorski et al., 2016) dataset, namely PARAMAWPS
(**Para**phrased MAth Word Problem Solving Repository), as a challenging dataset by the introduction of paraphrased structural variations of almost all categories of problems, but emphasizing more on the categories that the strong baseline models find difficult to solve.
DeBERTa (Decoding-enhanced BERT with disentangled attention) (He et al., 2020) is currently one of the most popular language models due to its effectiveness in achieving state-of-the-art results on a variety of natural language processing tasks, including language translation, text classification, and question answering. In our work, we find that the DeBERTa model achieves value accuracies of 63.5% and 91.0%
on the SVAMP dataset (Patel et al., 2021) and the MAWPS dataset (Koncel-Kedziorski et al., 2016)
respectively. It falls behind the current SOTA
accuracy of ROBERTA-DEDUCTREASONER (Jie et al., 2022) by a slight margin of 1 ± 0.20%
on the MAWPS dataset, but exceeds its accuracy of 47.3 ± 0.20% on the SVAMP dataset.
Our code and data are publicly available at https://github.com/Starscream-11813/Variational-Mathematical-Reasoning.
## 2 Problem Formulation
A Math Word Problem $S$ is a sequence of word tokens and numeric values, where $V_S = \{v_1, \ldots, v_m\}$ denotes the word tokens in $S$ and $n_S = \{n_1, \ldots, n_l\}$ denotes the set of numeric quantities in $S$. The set of word tokens $V_S$ consists of entities such as names of people, objects, units, and rates, while the set of quantities $n_S$ consists of the numerical amounts relevant to those entities. The goal of an MWP solver model is to map $S$ to a valid mathematical expression $E$, consisting of the quantities in $(n_S \cup C)$, where $C$ is a set of constants, and the fundamental mathematical operators $O = \{+, -, \times, \div\}$, which can be evaluated to obtain the correct answer.
## 3 Literature Review

## 3.1 Math Word Problem Solving

## 3.1.1 Preliminary Works
The dawn of research on MWP solving was in the mid-1960s (Feigenbaum et al., 1963; Bobrow, 1964). *Rule-based methods* (Fletcher, 1985; Bakman, 2007; Yuhui et al., 2010) are chronologically some of the earliest approaches to solving MWPs. They use a set of manually hard-coded rules about the language they are analyzing to find out regularities in the data. *Statistical methods*
(Kushman et al., 2014; Hosseini et al., 2014; Roy et al., 2015; Zhou et al., 2015; Mitra and Baral, 2016; Liang et al., 2016a,b) use generic ML classifiers to extract the entities, quantities, and operators from the problem statement and infer the numeric answer with simple logic. *Tree-based methods*(Koncel-Kedziorski et al., 2015; Roy and Roth, 2016; Roy et al., 2016; Roy and Roth, 2017) utilize the inherent binary tree-like structure of expressions/equations. Other primitive categories of approaches that have now been rendered somewhat obsolete are *Parsing-based methods* (Shi et al.,
2015; Zou and Lu, 2019), *Similarity-based methods* (Huang et al., 2016), and *Template-based* methods (Kushman et al., 2014; Zhou et al., 2015; Roy et al., 2016; Upadhyay et al., 2016; Huang et al., 2017).
## 3.1.2 Deep Learning-Based Methods
Currently, the landscape of deep learning models for the MWP solving task primarily comprises five distinct paradigms, SEQ2SEQ-based, SEQ2TREE-based, GRAPH2TREE-based, complex relation extraction-based, and *Large Language Model (LLM) prompt-based* approaches, each of which has demonstrated remarkable levels of performance and efficacy. Wang et al. (2017)
were the pioneers of introducing deep learning to solve MWPs with their proposed SEQ2SEQ model.
To improve the SEQ2SEQ model, researchers resorted to alternative strategies, such as reinforcement learning techniques (Wang et al., 2018b; Huang et al., 2018), using dense problem representations (Mishra et al., 2018), adopting template-based methodologies (Wang et al., 2019), and incorporating group attention mechanisms (Li et al.,
2019). Xie and Sun (2019) were the progenitors of the novel Goal-driven Tree-Structured (GTS) model, designed to generate expression trees using the tree-based decoder in order to imitate the goaldriven problem-solving approach of humans. The use of this tree decoder along with pre-trained language models, such as BERT (Devlin et al., 2018),
BART (Lewis et al., 2019), RoBERTa (Liu et al.,
2019b), as the encoder in some of the SEQ2TREE
approaches (Liu et al., 2019a; Shen and Jin, 2020; Wu et al., 2020; Lin et al., 2021; Shen et al., 2021; Liang et al., 2021; Liang et al.; Li et al.,
2021; Xiong et al., 2022) brought about substantial performance improvements over the previous SEQ2SEQ methods. Cao et al. (2021) devised a directed acyclic graph (SEQ2DAG) model of the equations for the purpose of extracting the expression. Zhang et al. (2020a) incorporated the idea of Knowledge Distillation (KD) (Hinton et al., 2015) in their proposed model where the teacher network is pre-trained to guide the learning behaviors of the *student networks*. Yu et al.
(2021) introduced 2 types of encoders in their model. Hong et al. (2021) modified the work of Xie and Sun (2019) by incorporating a symbolic reasoning based *Learning-by-fixing* (LBF) framework. Huang et al. (2021) attempted to emulate human-like analogical learning in their proposed memory-augmented model. GRAPH2TREE-based approaches (Zhang et al., 2020b; Li et al., 2020)
fused the merits of Graph-based Transformer (Yun et al., 2019; Cai and Lam, 2020) encoders with multiple Graph Convolutional Network (multiGCN) modules (Kipf and Welling, 2016), and treebased decoders to solve MWPs. Chatterjee et al.
(2021) introduced a weakly supervised approach for MWP solving. Li et al. (2021) introduced a contrastive learning approach with pattern divergence to solve MWPs. Jie et al. (2022) formulated the MWP solving task as a complex relation extraction problem and leveraged explainable deductive reasoning techniques to iteratively construct the target equations.
With the advent of LLMs, many innovative prompt-based methods (Shao et al., 2022; Li et al., 2022; Wang et al., 2022; Pi et al., 2022; Chen et al., 2022; Liang et al., 2023) of solving MWPs that capitalize on the models' exceptional few-shot learning capability came into the limelight and demonstrated good performance across numerous benchmark datasets. Cobbe et al. (2021) used verifiers with their GPT-3 (Brown et al., 2020) model.
Although LLMs excel at natural language understanding and have serendipitous emergent reasoning abilities (Yang et al., 2023), they are still lackluster in complex reasoning tasks (Huang and Chang, 2022). Numerous studies on complex reasoning tasks have empirically demonstrated that the approach of fine-tuning smaller models is more effective (Ho et al., 2022) than adopting LLM prompting techniques like Chain of Thought
(CoT) prompting (Wei et al., 2022).
## 3.2 Paraphrasing
Paraphrase generation has garnered significant attention from various NLP approaches, encompassing rule-based methods (McKeown, 1980; Meteer and Shaked, 1988), data-driven techniques (Madnani and Dorr, 2010), linguistic translation methods (Bannard and Callison-Burch, 2005; Barzilay and McKeown, 2001; Prakash et al., 2016) that leverage bilingual corpora for iterative refinement
(Madnani and Dorr, 2010; Prakash et al., 2016; Mallinson et al., 2017). Witteveen and Andrews
(2019) demonstrated the superiority of LLMs like GPT-3 over the preceding methods in the paraphrasing task.
Accordingly, our work attempts to leverage the strengths of GPT-3 to generate a more linguistically diverse pool of problem statements to finetune a relatively smaller DeBERTa solver model on the downstream task of MWP solving which falls under the rubric of complex reasoning tasks.
## 4 Methodology
Figure-1 in Appendix-A shows an overview of our proposed architecture. Given a problem statement $S$, we prompt the paraphraser model to generate $k$ linguistic variants of $S$, namely $S_1, S_2, \ldots, S_k$. These $k$ variant problems, along with the seed problem $S$, consist of quantities that are tagged appropriately using quantity tags. Each of the $k+1$ text sequences is then tokenized, and the content embeddings $H$ and positional embeddings $P$ of the tokens are fed to the DeBERTa model. The disentangled self-attention mechanism of DeBERTa's encoder utilizes $H$ and $P$ to generate the output $H_{output}$, which is a contextual representation of the content of each problem statement. $H_{output}$, along with the relative positional embeddings $P$ and absolute positional embeddings $I$ of each of the problem statements, is used by the Transformer layers of the Enhanced Mask Decoder (EMD) of DeBERTa to generate the $k+1$ predicted equations $E_1, E_2, \ldots, E_{k+1}$. These equations are then simplified, and the equation that is predicted the most number of times is elected as the final prediction of the model. This majority voting module is used only during the validation/testing phase and for inference. During the training phase, the $k+1$ problem statements are deemed stand-alone training samples, and the Negative Log-Likelihood loss (NLLLoss) is calculated using the predicted equations and the ground-truth equation. Consequently, if the training set used to train the model consists of $n$ samples, it is as if the model is trained with $(k+1) \times n = kn + n$ samples. The knowledge gathered from being trained on an extra $kn$ samples contributes to the robustness of the model.
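The overall inference flow described above can be summarised in a short sketch; the helper callables here (`paraphrase`, `tag_quantities`, `solve`, `simplify`) are stand-ins for the components detailed in the following subsections, and the function itself is an illustration rather than code from our released repository.

```python
from collections import Counter
from typing import Callable, List

def infer(problem: str,
          paraphrase: Callable[[str, int], List[str]],
          tag_quantities: Callable[[str], str],
          solve: Callable[[str], str],
          simplify: Callable[[str], str],
          k: int = 5) -> str:
    """Paraphrase the seed problem, solve every variant, and vote on the predictions."""
    variants = [problem] + paraphrase(problem, k)      # k + 1 problem statements
    tagged = [tag_quantities(v) for v in variants]     # quantity-tagged inputs
    equations = [solve(t) for t in tagged]             # E_1, ..., E_{k+1}
    normalised = [simplify(e) for e in equations]      # E'_1, ..., E'_{k+1}
    return Counter(normalised).most_common(1)[0][0]    # majority-voted equation
```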
## 4.1 Paraphrasing Model
The task of correctly reformulating a Math Word Problem statement requires a good level of language understanding, which is not present in its entirety in rule-based and data-driven methods of paraphrasing, rendering them unsuitable in this case. These methods frequently yield incorrect, incoherent, and grammatically inaccurate linguistic variations, sometimes even leaving out crucial numerical information. Accordingly, we choose *text-davinci-003* and *gpt-3.5-turbo*, two GPT-3 models from OpenAI, as the paraphrasing models. GPT-3 (Generative Pre-trained Transformer 3) (Brown et al., 2020) is a large language model with 175 billion parameters that is capable of performing a wide range of natural language processing tasks, including paraphrasing a given sentence. Upon being prompted, it restates a given problem statement in different words while still maintaining the original meaning. To select the most appropriate paraphrase, GPT-3 uses a scoring mechanism that evaluates the semantic similarity between the original sentence and each of the generated paraphrases. The model assigns a higher score to paraphrases that are more similar in meaning to the input sentence, based on its understanding of the context and the relationships between the words.
It also allows users to customize the level of complexity and the style of writing in the paraphrased version. We generate k variants of the original problem text by prompting the model.
## 4.1.1 Prompts And System Task Description
The prompts that we use for accomplishing our linguistic variant generation task are,
- system role **Task Description** —
You are a Math Word Problem rephraser that generates variations of math word problem statements.
- user role **Prompts** —
- Generate k1 paraphrased variations of the problem by changing the sentence structure.
- Generate k2 paraphrased variations of the problem by changing the named entities and objects.
- Generate k3 paraphrased variations of the problem with irrelevant numerical information.
Here, the total number of linguistic variants of a problem is $k = k_1 + k_2 + k_3$, with $5 \le k \le 15$.
A detailed discussion on the types of problem variations is delineated in Section-5.
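For illustration, a request of this kind could be issued through the pre-1.0 `openai` Python client as sketched below; the function name, temperature, and line-based output parsing are our own assumptions rather than details prescribed by the paraphrasing setup itself.

```python
import openai  # assumes the pre-1.0 `openai` client and an API key in OPENAI_API_KEY
from typing import List

SYSTEM_TASK = ("You are a Math Word Problem rephraser that generates "
               "variations of math word problem statements.")

def paraphrase_problem(problem: str, k: int, style: str) -> List[str]:
    """Ask gpt-3.5-turbo for k paraphrased variants of one problem statement."""
    user_prompt = (f"Generate {k} paraphrased variations of the problem "
                   f"by {style}.\n\nProblem: {problem}")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.7,  # assumed value, not taken from the paper
        messages=[{"role": "system", "content": SYSTEM_TASK},
                  {"role": "user", "content": user_prompt}],
    )
    text = response["choices"][0]["message"]["content"]
    # One variant per line is assumed here; real outputs may need stricter parsing.
    return [line.strip() for line in text.splitlines() if line.strip()][:k]

# e.g. paraphrase_problem(seed_problem, 3, "changing the sentence structure")
```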
## 4.2 Quantity Tagging
All the quantities (written either numerically or in words) in every single variant of the problem along with the original problem itself, are tagged with unique quantity tags using RegEx and a Python script which is provided in our GitHub repository
(see Section-1). This quantity tagging step ensures that the same quantity is present in both the input as well as in the output. The quantity-tagged tokens have their own content and positional embeddings. For example, if the problem statement is,
"Melanie picked 4 plums, Dan picked 9 plums, and Sally picked 3 plums from the plum tree. How many plums were picked in total?"
then the quantity-tagged version of the problem statement is,
"Melanie picked [Q1] *plums, Dan* picked [Q2] *plums, and Sally picked*
[Q3] *plums from the plum tree. How* many plums were picked in total?"
We use this quantity tagging for the ground truth equation's quantities as well.
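A minimal sketch of such a tagger, using only Python's `re` module, is given below; it covers quantities written in digits (number words would need an extra lookup) and is an illustration rather than the exact script in our repository.

```python
import re

NUMBER = re.compile(r"\d+(?:\.\d+)?")  # integers and decimals written in digits

def tag_quantities(problem: str):
    """Replace each numeric quantity with a [Qi] tag and remember the mapping."""
    mapping = {}

    def _tag(match: re.Match) -> str:
        tag = f"[Q{len(mapping) + 1}]"
        mapping[tag] = match.group(0)
        return tag

    tagged = NUMBER.sub(_tag, problem)
    return tagged, mapping

text = ("Melanie picked 4 plums, Dan picked 9 plums, and Sally picked "
        "3 plums from the plum tree. How many plums were picked in total?")
tagged, mapping = tag_quantities(text)
# tagged  -> "... picked [Q1] plums, ... picked [Q2] plums, ... picked [Q3] plums ..."
# mapping -> {'[Q1]': '4', '[Q2]': '9', '[Q3]': '3'}
```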
## 4.3 Encoder
We use the pre-trained language model DeBERTa
(Decoding enhanced **BERT** with disentangled attention). DeBERTa is a newly developed neural language model by He et al. (2020) that is based on the Transformer architecture. It boasts a significant advancement over previous state-of-the-art
(SOTA) pre-trained language models (PLMs) due to the incorporation of two novel techniques. The first technique is a disentangled attention mechanism and the second technique is an enhanced mask decoder. Together, these techniques make DeBERTa a highly effective PLM that outperforms its predecessors on a wide range of NLP
downstream tasks.
## 4.3.1 Disentangled Attention
Contrary to BERT, which utilizes a vector representation for each word in the input layer by summing its content and position embeddings, in DeBERTa, every word is represented by two separate vectors that encode its content and position individually. The attention scores between words are computed using separate matrices that are disentangled based on the content and relative position of each word. This design choice is based on the observation that the attention weight between a pair of tokens is influenced by both their content and in tandem their relative positions. This especially holds paramount importance for the task of MWP solving as the relative positions of certain keywords in the problem statements dictate the solution.
To represent a token $x_i$ located at a specific position $i$ within a given sequence, it employs two distinct vectors, $H_i$ and $P_{i|j}$, which are respectively the content and relative positional representation vectors of $x_i$ with respect to a token $x_j$ at position $j$. The inter-token attention weights between $x_i$ and $x_j$ can be broken down into four constituent components,
$$\begin{split}A_{ij}&=\langle H_{i},P_{i|j}\rangle\times\langle H_{j},P_{j|i}\rangle^{\top}\\ &=\underbrace{H_{i}H_{j}^{\top}}_{C2C}+\underbrace{H_{i}P_{j|i}^{\top}}_{C2P}+\underbrace{P_{i|j}H_{j}^{\top}}_{P2C}+\underbrace{P_{i|j}P_{j|i}^{\top}}_{P2P}\end{split}\tag{1}$$
where the four disentangled matrix attention scores represent their contents and positions as *content-to-content (C2C)*, *content-to-position (C2P)*, *position-to-content (P2C)*, and *position-to-position (P2P)*. The P2P portion of (1) is rendered largely obsolete since DeBERTa uses relative positional embeddings, which is why no useful information can be extracted from it.
The self-attention mechanism described by Vaswani et al. (2017) has 3 parameters, Q (Query),
K (Key), and V (Value). The non-contextual embedding that is being contextualized at any point requests for information from its surrounding tokens within the context window and that is represented by the query token, and the tokens that the model pays attention to are the key tokens.
$$\begin{array}{l} Q_{c}=HW_{c_{Q}},\quad K_{c}=HW_{c_{K}},\quad V_{c}=HW_{c_{V}} \\ Q_{r}=PW_{r_{Q}},\quad K_{r}=PW_{r_{K}} \end{array}\tag{2}$$

where $W_{c_Q} \in \mathbb{R}^{d\times d}$, $W_{c_K} \in \mathbb{R}^{d\times d}$, and $W_{c_V} \in \mathbb{R}^{d\times d}$ are the projection weight matrices for the projected content vectors $Q_c$, $K_c$, and $V_c$ respectively. Similarly, $W_{r_Q} \in \mathbb{R}^{d\times d}$ and $W_{r_K} \in \mathbb{R}^{d\times d}$ play the role of projection matrices for the projected relative position vectors $Q_r$ and $K_r$. The metric to calculate the relative distance between tokens $x_i$ and $x_j$ is,
$$\delta(i,j)=\begin{cases}0,&\text{if }i-j\leq-k\\ 2k-1,&\text{if }i-j\geq k\\ i-j+k,&\text{otherwise}\end{cases}\tag{3}$$

which implies $\delta(i,j) \in [0, 2k)$. Each element $\bar{A}_{ij}$ of the attention matrix $\bar{A}$ denotes the attention score from token $x_i$ to the token $x_j$ and is computed using the vectors defined in (2) in the following manner,
$$\bar{A}_{i j}=\underbrace{Q_{i}^{c}K_{j}^{c\top}}_{C2C}+\underbrace{Q_{i}^{c}K_{\delta(i,j)}^{r\top}}_{C2P}+\underbrace{K_{j}^{c}Q_{\delta(j,i)}^{r\top}}_{P2C}\tag{4}$$
The attention score is yielded using the dot-product of the query and key in the formula to let the model have an idea of how similar the key is to the query. The output of the self-attention mechanism, denoted by $H_{output} \in \mathbb{R}^{N\times d}$, is,
$$H_{output}=\mathbf{softmax}\left({\frac{{\bar{A}}}{{\sqrt{3d}}}}\right)V_{c}\qquad\qquad(5)$$
The result of the dot-product is normalized by dividing by $\sqrt{3d}$ to avoid a very hard softmax with small gradients, which is especially required for training stability in the case of large-scale PLMs
(Vaswani et al., 2017; He et al., 2020).
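For clarity, Eqs. (3)-(5) can be condensed into the following PyTorch sketch; it is a direct, unoptimised transcription of the formulas above, with tensor names following our notation, and not the implementation shipped with DeBERTa.

```python
import math
import torch

def delta(i: int, j: int, k: int) -> int:
    """Relative-distance bucket from Eq. (3)."""
    d = i - j
    if d <= -k:
        return 0
    if d >= k:
        return 2 * k - 1
    return d + k

def disentangled_attention(Qc, Kc, Vc, Qr, Kr, k: int):
    """Eqs. (4)-(5): C2C + C2P + P2C scores, then softmax-weighted values.

    Qc, Kc, Vc: (N, d) projected content vectors.
    Qr, Kr:     (2k, d) projected relative-position vectors.
    """
    N, d = Qc.shape
    c2c = Qc @ Kc.T                                      # (N, N) content-to-content
    # Index the relative-position projections with delta(i, j) / delta(j, i).
    idx = torch.tensor([[delta(i, j, k) for j in range(N)] for i in range(N)])
    c2p = torch.einsum("id,ijd->ij", Qc, Kr[idx])        # Q_i^c . K_{delta(i,j)}^r
    p2c = torch.einsum("jd,ijd->ij", Kc, Qr[idx.T])      # K_j^c . Q_{delta(j,i)}^r
    scores = (c2c + c2p + p2c) / math.sqrt(3 * d)
    return torch.softmax(scores, dim=-1) @ Vc            # (N, d) output H_output
```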
## 4.4 Decoder
He et al. (2020) postulates that the premature integration of absolute positions, which is employed by BERT (Devlin et al., 2018) in its decoding phase, could potentially impede the model's ability to acquire adequate knowledge of relative positions. With this as the justification, DeBERTa, being a model that was pre-trained using MLM
(Masked Language Modeling), uses the absolute positions of the tokens in the penultimate layer, right before the softmax layer during the masked token prediction in its decoding phase. This enables all the Transformer layers in the decoder to work with the relative positional information without the susceptibility of hampering the learning process of the model. Since the absolute positions of the tokens in a sentence highly influence the nuanced understanding of the sentence's semantic and syntactic structure, and extracting information from only the relative positions isn't sufficient, the absolute positions are incorporated in the tail-end of the pipeline in the case of DeBERTa. This is why DeBERTa's decoding module is dubbed an Enhanced Mask Decoder (EMD) and it demonstrably outperforms the decoder counterparts of its predecessor PLMs (He et al., 2020).
## 4.5 Majority Voting
Since there can be multiple valid equations for a single MWP, each of the $k+1$ predictions from the decoder, $E_1, E_2, \ldots, E_{k+1}$, is simplified to a reduced normal form using the Python package sympy (https://www.sympy.org/en/index.html). These $k+1$ simplified predictions, $E'_1, E'_2, \ldots, E'_{k+1}$, are then counted and the prediction that is yielded the most number of times is elected as the final answer of the whole solver model. It is to be noted that this voting mechanism is used only during the testing/validation phases or during inference.
$$E^{*}\leftarrow\operatorname*{argmax}_{E'_{i}}\ \textbf{votes}(E'_{i}),\qquad i=1,2,\ldots,k+1\tag{6}$$
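A sketch of this normalise-and-vote step is shown below; it operates on expression strings (the right-hand sides of the predicted equations), and breaking ties by the first-seen prediction is an assumption on our part since no dedicated tie-breaking strategy is used.

```python
from collections import Counter
from sympy import simplify, sympify

def majority_vote(predictions):
    """Simplify each predicted expression with sympy, then pick the most frequent."""
    normalised = []
    for expr in predictions:            # e.g. "69*13 + (420-69)*7"
        try:
            normalised.append(str(simplify(sympify(expr))))
        except Exception:               # unparsable prediction: keep it verbatim
            normalised.append(expr)
    winner, _votes = Counter(normalised).most_common(1)[0]
    return winner

# majority_vote(["x*2 + 2*x", "4*x", "2*(x + x)"])  ->  "4*x" (all three normalise to 4*x)
```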
## 5 Experiment

## 5.1 Data Acquisition
We introduce a new large-scale dataset, namely PARAMAWPS (**Para**phrased MAth Word Problem Solving Repository), consisting of 16,278 single equation MWPs. It is generated as a by-product of using one of the most commonly-used English MWP datasets, MAWPS
(Koncel-Kedziorski et al., 2016) which consists of a total of 2,373 problems, and the paraphraser model. We save the generated paraphrased variants of selectively sampled problems of MAWPS
and also manually include inverse versions of the problems to create our dataset. The dataset contains all the problems from the original MAWPS
dataset as well as paraphrased versions of some of the more challenging problems within MAWPS, hence the name, PARAMAWPS. The samples are manually checked for correctness by 3 undergraduate students. By generating variations of some of the more difficult problems, we intend to increase familiarity of challenging concepts found within those problems to any model trained over this data, as well as more thoroughly challenge existing models trained on datasets that do not provide said complexity at an equal or higher density. We generate k problems from each seed problem in the dataset, adding up to a total of k + 1 problems, where 5 ≤ k ≤ 16. Each of the k generated problems will be a variation on the original that will feature several changes to the problem text. We generate 4 types of variations of each seed problem (see Table-7 in Appendix-A).
- **Changed phrase order** - Variations with the order of the phrases being changed facilitate a break from the standard problem statement template where quantities are generally given before the question formulation. Having a changed ordering of phrases makes a priori question formulations more common.
- **Changed object and entity names** - Object and entity names are altered with interchangeable alternatives (names, synonyms)
in problem variations to prevent fixation on elements of the problem mostly agnostic to
the process of solving the problem. It also serves to prevent an increase in density for similar terms that originate from the seed problem yielding good problem samples for language models (Lee et al., 2021).
- **Added unrelated information** - Some variations contain an extra phrase or quantity, or similar additions that are in excess of the information required to solve a problem and do not affect the original problem formulation in any meaningful way. These adversarial variations serve to obfuscate and familiarize the models with only the necessary information, enhancing deductive abilities (Kumar et al.,
2021).
- **Inverted question** - Some variations will take a previously known quantity and turn it into an unknown quantity while revealing the previous unknown quantity of the problem. This, in many cases, alters the question drastically, changing the needed calculations and equations, while keeping a roughly similar question body to the seed problem. Liu et al. (2021) used such problem samples in their work.
## 5.1.1 Seed Problems
Many of the seed problems used to generate variations from MAWPS pose sufficient difficulty to even SOTA MWP solvers and often contain numeric information embedded within the statement itself. An example is the following problem,
"Mary, Sam, Keith, and Alyssa each have 6 marbles. How many marbles do they have in all?"
This problem yields the equation "x = 4 × 6", despite the quantity 4 not being mentioned anywhere in the statement. This quantity had to be inferred from the other parts of the statement itself, namely, the 4 entities referred to in the statement; Mary, Sam, Keith, and Alyssa. Another such problem is,
"*When the price of diesel rose by 10%,*
a user reduced his diesel consumption by the same amount. How much would his diesel bill change in terms of percentage?"
which yields the complex equation of "x = (1.0−
((1.0 + (10.0×0.01))×(1.0−(10.0×0.01))))×
100.0". This problem, although seemingly simple on the surface in terms of quantities described, has several calculations dictated through the problem statement, some of which require additional realworld anecdotal knowledge, such as the conversion of percentages. Another problem with similar inferences of a more complex nature is,
"*Lauren wants to mix 5 liters of 7% milk* with skim-milk (0% fat) to produce a mixture of 2.9787% milk. How much skim-milk should Lauren add?"
yielding the equation "x = (7.0 × 0.01) ×
5.0/(2.9787 × 0.01) − 5.0", containing similar conversions of percentages, as well as additional knowledge of types of mixtures. Here, 7% milk is mixed with pure milk, or 100% milk. Yet the only indication that the milk is of 100% purity is nowhere to be seen in a direct capacity in the problem, but rather in a roundabout way - by referring to the amount of fat (0%) rather than the purity of the milk. Models have to infer a vast amount of real-world contextual knowledge to be able to solve such problems. Problems with seconddegree unknown quantities are also present as seed problems. For example, the problem
"*The Hudson River flows at a rate of 3* miles per hour. A patrol boat travels 60 miles upriver and returns in a total time of 9 hours. What is the speed of the boat in still water?"
that yields the equation "(60.0/(x − 3.0)) +
(60.0/(3.0+x)) = 9.0", which is a quadratic equation. The problem itself deals with calculations of speed, which requires knowledge of how speed is calculated given certain quantities, as well as the effect of certain elements in the problem scenario on speed.
We resort to this data generation approach due to the lack of large-scale, diverse, single-equation English MWP datasets. Other commonly-used benchmark datasets, MATH23K (Wang et al., 2017) and APE210K (Liang et al., 2021) consist of math problems written in Chinese Mandarin.
We also aim to diversify the samples in MAWPS
to enable better training for MWP solvers (Schick and Schütze, 2021; Kumar et al., 2022). SVAMP,
created by Patel et al. (2021) consists of challenging versions of problems and is considered a challenge set for testing the robustness of MWP solvers. We use the original version of MAWPS
and SVAMP along with our dataset PARAMAWPS
for conducting our experiments. A comparative summary of the statistics of the datasets used is shown in Table-2 and their operator count distributions are portrayed in Figure-2.
| Properties | SVAMP | MAWPS | PARAMAWPS |
|--------------------------------|---------|---------|-------------|
| # of problems | 1,000 | 2,373 | 16,278 |
| # of unique templates | 27 | 159 | 215 |
| Avg. # of operators | 1.236 | 1.606 | 1.68 |
| Avg. # of quantities per prob. | 2.81 | 2.57 | 2.54 |
| Avg. # of quantities per equ. | 2.23 | 2.59 | 2.67 |
| # of problems with constants | 0 | 185 | 3313 |

Table 2: Statistics of the datasets used in our experiments.
## 5.2 Model Implementation Details And Training

## 5.2.1 Baseline Models
We implement the DeBERTa model using Microsoft's *deberta-base* that is publicly available on Hugging Face. The other baseline MWP solver models are implementations already available in the open-source MWPToolkit developed by Lan et al. (2022). We use an extensive set of baseline models, Transformer (Vaswani et al., 2017),
DNS (Wang et al., 2017), MathEN (Wang et al.,
2018a), GroupATT (Li et al., 2019), RNNEncDec
(Sutskever et al., 2014), RNNVAE (Su et al.,
2018), BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019b), and compare them with the performance of the DeBERTa model. See Appendix-A
for more training process details.
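For reference, a minimal sketch of loading the encoder backbone with the Hugging Face `transformers` library is shown below; registering the quantity tags as additional special tokens is our own illustrative choice rather than a detail taken from MWPToolkit.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
encoder = AutoModel.from_pretrained("microsoft/deberta-base")

# Treat the quantity tags as atomic tokens so they are never split by the tokenizer.
quantity_tags = [f"[Q{i}]" for i in range(1, 11)]
tokenizer.add_special_tokens({"additional_special_tokens": quantity_tags})
encoder.resize_token_embeddings(len(tokenizer))

batch = tokenizer(
    "Melanie picked [Q1] plums, Dan picked [Q2] plums, and Sally picked [Q3] "
    "plums from the plum tree. How many plums were picked in total?",
    return_tensors="pt",
)
hidden_states = encoder(**batch).last_hidden_state  # contextual token representations
```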
## 5.3 Result Analysis
| Methods | MAWPS† (%) | SVAMP (%) | PARAMAWPS† (%) |
|---|---|---|---|
| DNS | 59.5 | 22.1 | 71.2 |
| Math-EN | 69.2 | 21.8 | 71.6 |
| GROUP-ATT | 76.1 | 19.2 | 70.8 |
| RNNEncDec | 79.4 | 25.4 | 73.6 |
| RNNVAE | 79.8 | 25.9 | 72.8 |
| Transformer | 85.6 | 20.7 | 64.6 |
| BERT | 86.9 | 24.8 | 72.1 |
| RoBERTa | 88.4 | 30.3 | 72.5 |
| DeBERTa | 90.7 | **63.5** | 74.1 |
| DeBERTa + PM + VM | **91.0** | - | - |
| DeBERTa + VM | - | - | **79.1** |

Table 3: Value accuracy of the DeBERTa model and various baseline models. † denotes 5-fold cross validation. PM stands for Paraphrasing Model and VM stands for Voting Mechanism.
Table-3 shows the performance comparison of
the DeBERTa model and the baseline models mentioned in Section-5.2.1. The DeBERTa model coupled with the Paraphrasing model and the Voting
Mechanism outperforms all the baseline models in the MAWPS (Koncel-Kedziorski et al., 2016)
dataset with an accuracy of 91.0%. The Paraphrasing Model and the Voting Mechanism contributed to a 0.3% increase in accuracy. The vanilla DeBERTa model also outperforms the baseline models in our PARAMAWPS dataset by boasting an accuracy of 74.1%. With the voting mechanism at the tail-end of the pipeline, we are able to yield an improvement of the accuracy by 5.04% making the accuracy 79.1%. We test the robustness of the vanilla DeBERTa model on the SVAMP
(Patel et al., 2021) challenge dataset and get an accuracy of 63.5% which is quite higher than that of the other baseline models. The model still lags a mere 1 ± 0.20% behind the current SOTA model on MAWPS, which is the ROBERTADEDUCTREASONER model by Jie et al. (2022)
(92.0 ± 0.20%) but supersedes its accuracy of 47.3 ± 0.20% on the SVAMP dataset.
The superiority of the model's accuracy in PARAMAWPS over SVAMP, despite the demonstrably greater difficulty of the MWP samples in PARAMAWPS, indicates that training a language model on a more diverse set of linguistically varied problem statements leads to a better quality mathematical reasoning ability after the training phase.
## 5.4 Ablation Study
To gain insights into the individual contributions of the Paraphrasing Model and Voting Mechanism in conjunction with the DeBERTa model, we perform ablation studies. Table-4 shows the effect of
| # of variants | MAWPS† (%) |
|---|---|
| 5 | 90.4 |
| 10 | 90.7 |
| 15 | 90.8 |

Table 4: Value accuracy with different numbers of linguistic variants of the problem samples. † denotes 5-fold cross validation.
| Voting Mechanism | PARAMAWPS† (%) |
|---|---|
| w/o VM | 72.9, 74.1, 76.5, 72.1, 74.6 |
| w/ VM | 78.5, 77.8, 82.4, 77.2, 79.5 |

Table 5: Effect of Majority Voting on Value accuracy across all 5 folds. † denotes 5-fold cross validation.
increasing the number of generated problem variants used to infer the solution expressions of the problem samples in the MAWPS dataset's test set. Although there is a slight decrease in the accuracy for k = 5, we see a minuscule increase in accuracy for k = 10 and k = 15. In Table-5 we see the impact of the Voting Mechanism, which contributed to a 5.04% increase on average in the accuracy of the DeBERTa model on the PARAMAWPS dataset.
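Concretely, averaging the per-fold accuracies in Table 5 gives $(72.9 + 74.1 + 76.5 + 72.1 + 74.6)/5 = 74.04\%$ without voting and $(78.5 + 77.8 + 82.4 + 77.2 + 79.5)/5 = 79.08\%$ with voting, which is where the average gain of $79.08 - 74.04 = 5.04$ percentage points comes from.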
## 5.5 MWP Task Performance Analysis Of Large Language Models
To test out the assertion made in other studies
(Huang and Chang, 2022; Ho et al., 2022) about the incompetence of LLMs in complex reasoning tasks compared to fine-tuned smaller models, we use the GPT-J model and some of the presently used GPT-3 models by OpenAI to perform the task of MWP solving. We use the original version of MAWPS (Koncel-Kedziorski et al., 2016)
along with our dataset PARAMAWPS for testing the mathematical reasoning of these models.
| Models | MAWPS† (%) | PARAMAWPS† (%) |
|---|---|---|
| GPT-J (6B) | 9.9 | 5.9 |
| text-babbage-001 (6.7B) | 2.76 | 3.21 |
| text-curie-001 (13B) | 4.09 | 4.20 |
| gpt-3.5-turbo (175B) | 80.3 | 73.0 |

Table 6: Value accuracy of the LLMs in a zero-shot setup testing. † denotes evaluation on the whole dataset.
One of the most capable models in the GPT-3.5 series of models is *text-davinci-003*, with 175 billion parameters and the ability to follow instructions consistently and produce lengthy outputs. However, the most capable and up-to-date model according to OpenAI is *gpt-3.5-turbo*, with 175 billion parameters, which is primarily optimized for chat completions but can be tweaked to follow more specific instructions similar to *text-davinci-003*. While all models used are instructed to output in a specific format - 'Answer: [ANS]' with just the numerical value in the place of '[ANS]',
the ability to do so consistently deteriorated with the models with relatively fewer parameters. Out of the base GPT-3 models, the 13 billion parameters *text-curie-001* can output in the given format relatively consistently, *text-babbage-001* with 6.7 billion parameters can occasionally produce the output in the correct format, but tries to generate full sentences more often than not, whereas the 350 million parameters *text-ada-001* can barely generate a single output in the correct format, choosing to generate full sentences almost all of the time. Models tend to try to *'work through'* the problem in text form rather than just generating the output, although with *gpt-3.5-turbo* this can be mostly mitigated by using very specific instructions for the prompt. The results in Table-6 and Table-3 support the current weakness of LLMs in mathematical reasoning tasks and the suitability of fine-tuning smaller models. It indicates the improvement in performance for a well-reasoning, but comparatively small model when it has the option to democratically choose from a substantial number of solution guesses.
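A sketch of how such completions can be scored is given below; the regular expression and the numeric tolerance are our own assumptions built around the 'Answer: [ANS]' format described above.

```python
import re

ANSWER = re.compile(r"Answer:\s*(-?\d+(?:\.\d+)?)", re.IGNORECASE)

def extract_answer(completion: str):
    """Pull the numeric value out of a completion of the form 'Answer: 3354'."""
    match = ANSWER.search(completion)
    return float(match.group(1)) if match else None

def value_accuracy(completions, gold_answers, tol: float = 1e-4) -> float:
    """Fraction of completions whose extracted value matches the gold answer."""
    correct = sum(
        1 for c, g in zip(completions, gold_answers)
        if (pred := extract_answer(c)) is not None and abs(pred - g) <= tol
    )
    return correct / len(gold_answers)

# value_accuracy(["Answer: 3354"], [3354.0]) -> 1.0
```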
## 6 Conclusion And Future Work
In this paper, we propose the idea of an MWP solving framework that utilizes the paraphrased linguistic variations of problem texts to train a DeBERTa model that generates candidate solution expressions and finalizes the predicted math expression by employing majority voting on a set of simplified candidate expressions. Our findings demonstrate that incorporating linguistic variants of problem statements during training and utilizing a voting mechanism for candidate predictions enhance the model's mathematical reasoning and overall robustness. We also introduce a large-scale, diverse, and challenging single-equation MWP dataset, PARAMAWPS, consisting of paraphrased, inverse, and adversarial variants of selectively sampled datapoints from MAWPS, as a formidable evaluation test-bed and a proper benchmark for training MWP solver models. We wish to experiment further with harder problem text variations (*e.g.* grammatical errors) and conduct a thorough error analysis of the models for identifying their lapses in mathematical reasoning and discovering more scopes of improvement. We also aim to expand our research to encompass the intricate realms of multi-equation, multi-step deduction, and domain-knowledge problems. We hope our approach and findings will pave the way to more scholarly works on the vistas of AGI and in tandem be deemed a noteworthy and meaningful contribution to this domain of research.
## 7 Limitations
There are still some avenues of improvement in our work. The temporal overhead due to the problem variant generation by the paraphraser model may make our proposed architecture unsuitable for real-world applications even though it takes merely 10 to 12 seconds to generate k = 5 variants for a single sample. Another limitation of our work is the absence of a proper tie-breaking strategy in our Majority Voting module. Furthermore, we need to introduce a system of weighted votes (*e.g.* semantic similarity scores as weights)
so that the votes of wrongly predicted equations do not trump those of correctly generated predictions.
We also plan to incorporate and experiment with the Tree-based decoder (Xie and Sun, 2019) in our proposed pipeline.
## References
Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. arXiv preprint math/0701393.
Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd annual meeting of the Association for Computational Linguistics (ACL05), pages 597–
604.
Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In *Proceedings of the 39th annual meeting of the Association for Computational Linguistics*, pages 50–57.
Daniel G Bobrow. 1964. Natural language input for a computer problem solving system.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint* arXiv:2303.12712.
Deng Cai and Wai Lam. 2020. Graph transformer for graph-to-sequence learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7464–7471.
Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo. 2021. A bottom-up dag structure extraction model for math word problems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 39–46.
Oishik Chatterjee, Aashish Waikar, Vishwajeet Kumar, Ganesh Ramakrishnan, and Kavi Arya. 2021. A
weakly supervised model for solving math word problems. *arXiv preprint arXiv:2104.06722*.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *arXiv preprint* arXiv:2211.12588.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805.
Edward A Feigenbaum, Julian Feldman, et al. 1963.
Computers and thought. New York McGraw-Hill.
Charles R Fletcher. 1985. Understanding and solving arithmetic word problems: A computer simulation.
Behavior Research Methods, Instruments, & Computers, 17(5):565–571.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint* arXiv:2006.03654.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Namgyu Ho, Laura Schmid, and Se-Young Yun.
2022. Large language models are reasoning teachers. *arXiv preprint arXiv:2212.10071*.
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. 2021. Learning by fixing: Solving math word problems with weak supervision. In AAAI Conference on Artificial Intelligence.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In *EMNLP*, volume 523533. Citeseer.
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
2018. Neural math word problem solver with reinforcement learning. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 213–223.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to solve math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 805–814.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887–896.
Jie Huang and Kevin Chen-Chuan Chang. 2022. Towards reasoning in large language models: A survey.
arXiv preprint arXiv:2212.10403.
Shifeng Huang, Jiawei Wang, Jiao Xu, Da Cao, and Ming Yang. 2021. Recall and learn: A memoryaugmented solver for math word problems. arXiv preprint arXiv:2109.13112.
Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning to reason deductively: Math word problem solving as complex relation extraction. *arXiv preprint* arXiv:2203.10316.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Sheri Kingsdorf and Jennifer Krawec. 2016. A broad look at the literature on math word problem-solving interventions for third graders. *Cogent Education*,
3(1):1135770.
Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics*, 3:585–597.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In *Proceedings of* the 2016 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1152–1157.
Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2021. Adversarial examples for evaluating math word problem solvers. arXiv preprint arXiv:2109.05925.
Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi.
2022. Practice makes a solver perfect: Data augmentation for math word problem solvers. *arXiv* preprint arXiv:2205.00177.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In *Proceedings of the* 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271–281.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan, Bing Tian Dai, Yan Wang, Dongxiang Zhang, and Ee-Peng Lim. 2022. Mwptoolkit: An open-source framework for deep learning-based math word problem solvers. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 36, pages 13188–
13190.
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. *arXiv preprint* arXiv:2107.06499.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. *arXiv preprint arXiv:1910.13461*.
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, and Dongxiang Zhang. 2019. Modeling intra-relation in math word problems with different functional multi-head attentions. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6162–6167.
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, and Sheng Zhong. 2020. Graph-totree neural networks for learning structured inputoutput translation with applications to semantic parsing and math word problem. arXiv preprint arXiv:2004.13781.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2022. On the advance of making language models better reasoners. *arXiv preprint arXiv:2206.02336*.
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, and Yunbo Cao. 2021. Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems.
arXiv preprint arXiv:2110.08464.
Chao-Chun Liang, Kuang-Yi Hsu, Chien-Tsung Huang, Chung-Min Li, Shen-Yu Miao, and Keh-Yih Su. 2016a. A tag-based english math word problem solver with understanding, reasoning and explanation. In *Proceedings of the 2016 Conference of* the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 67–71.
Chao-Chun Liang, Shih-Hong Tsai, Ting-Yun Chang, Yi-Chung Lin, and Keh-Yih Su. 2016b. A meaningbased English math word problem solver with understanding, reasoning and explanation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 151–155, Osaka, Japan. The COLING
2016 Organizing Committee.
Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, and Ashwin Kaylan.
2023. Let gpt be a math tutor: Teaching math word problem solvers with customized exercise generation. *arXiv preprint arXiv:2305.14386*.
Zhenwen Liang, Jipeng Zhang, Jie Shao, and Xiangliang Zhang. 2021. Mwp-bert: A strong baseline for math word problems. *arXiv preprint* arXiv:2107.13435.
Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Jie Shao, and Xiangliang Zhang. Mwp-bert: A
numeracy-augmented pre-trained encoder for math word problems.
Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen, Qi Liu, Hao Wang, and Shijin Wang. 2021. Hms:
A hierarchical solver with dependency-enhanced understanding for math word problem. In Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021, pages 4232–
4240.
Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2021. Roda: reverse operation based data augmentation for solving math word problems. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
30:1–11.
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. 2019a. Tree-structured decoding for solving math word problems. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing
(EMNLP-IJCNLP), pages 2370–2379.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Nitin Madnani and Bonnie J Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. *Computational Linguistics*,
36(3):341–387.
Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893, Valencia, Spain. Association for Computational Linguistics.
Kathleen R McKeown. 1980. Paraphrasing using given and new information in a question-answer system.
Technical Reports (CIS), page 723.
Marie Meteer and Varda Shaked. 1988. Strategies for effective paraphrasing. In Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2021. A diverse corpus for evaluating and developing english math word problem solvers. arXiv preprint arXiv:2106.15772.
Pruthwik Mishra, Litton J Kurisinkel, Dipti Misra Sharma, and Vasudeva Varma. 2018. Equgener: A
reasoning network for word problem solving by generating arithmetic equations. In *Proceedings of the* 32nd Pacific Asia Conference on Language, Information and Computation.
Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 2144–2153.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve simple math word problems? *arXiv preprint* arXiv:2103.07191.
Jordan Peterson, Robert Pihl, Daniel Higgins, Jean Séguin, and Richard Tremblay. 2003. Neuropsychological performance, iq, personality, and grades in a longitudinal grade-school male sample. *Individual* Differences Research, 1:159–172.
Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, and Weizhu Chen. 2022. Reasoning like program executors. *arXiv preprint arXiv:2201.11473*.
Jean Piaget. 2013. *Child's Conception of Number: Selected Works vol 2*. Routledge.
Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual lstm networks. *arXiv preprint* arXiv:1610.03098.
Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413.
Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016.
Equation parsing: Mapping sentences to grounded equations. *arXiv preprint arXiv:1609.08824*.
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. *Transactions of the Association for Computational Linguistics*, 3:1–13.
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. *arXiv* preprint arXiv:2104.07540.
Zhihong Shao, Fei Huang, and Minlie Huang. 2022.
Chaining simultaneous thoughts for numerical reasoning. *arXiv preprint arXiv:2211.16482*.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate &
rank: A multi-task framework for math word problems. *arXiv preprint arXiv:2109.03034*.
Yibin Shen and Cheqing Jin. 2020. Solving math word problems with multi-encoders and multi-decoders.
In *Proceedings of the 28th International Conference* on Computational Linguistics, pages 2924–2934.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1132–1142.
Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
Sowmya S Sundaram, Sairam Gurajada, Marco Fisichella, Savitha Sam Abraham, et al. 2022. Why are nlp models fumbling at elementary math? a survey of deep learning based word problem solvers.
arXiv preprint arXiv:2205.15683.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
Sequence to sequence learning with neural networks.
Advances in neural information processing systems, 27.
Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word problems. In *Proceedings of the 2016 Conference on* Empirical Methods in Natural Language Processing, pages 297–306.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing systems*, 30.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. 2018a. Translating a math word problem to an expression tree. arXiv preprint arXiv:1811.05632.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018b. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 32.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
2019. Template-based math word problem solvers with recursive neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7144–7151.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *arXiv preprint arXiv:2203.11171*.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 845–
854.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Sam Witteveen and Martin Andrews. 2019. Paraphrasing with large language models. arXiv preprint arXiv:1911.09661.
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing Huang. 2020. A knowledge-aware sequence-to-tree network for math word problem solving. In *Proceedings of the 2020 Conference on Empirical Methods*
in Natural Language Processing (EMNLP), pages 7137–7146.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In *IJCAI*, pages 5299–5305.
Jing Xiong, Zhongwei Wan, Xiping Hu, Min Yang, and Chengming Li. 2022. Self-consistent reasoning for solving math word problems. *arXiv preprint* arXiv:2210.15373.
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. 2023. Harnessing the power of llms in practice: A survey on chatgpt and beyond. *arXiv* preprint arXiv:2304.13712.
Weijiang Yu, Yingpeng Wen, Fudan Zheng, and Nong Xiao. 2021. Improving math word problems with pre-trained knowledge and hierarchical reasoning.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 3384–3394.
Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and Huang Ronghuai. 2010. Frame-based calculus of solving arithmetic multi-step addition and subtraction word problems. In *2010 Second International* Workshop on Education Technology and Computer Science, volume 2, pages 476–479. IEEE.
Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. 2019. Graph transformer networks. Advances in neural information processing systems, 32.
Dongxiang Zhang, Lei Wang, Luming Zhang, Bing Tian Dai, and Heng Tao Shen. 2019. The gap of semantic parsing: A survey on automatic math word problem solvers. *IEEE transactions* on pattern analysis and machine intelligence, 42(9):2287–2305.
Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a.
Teacher-student networks with multiple decoders for solving math word problem. IJCAI.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graph-totree learning for solving math word problems. Association for Computational Linguistics.
Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015.
Learn to solve algebra word problems using quadratic programming. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 817–822.
Yanyan Zou and Wei Lu. 2019. Text2math: End-to-end parsing text into math expressions. arXiv preprint arXiv:1910.06571.
## A Appendix A.1 Dataset Split
We use an 80:10:10 train-validation-test split for our PARAMAWPS dataset. For MAWPS, we use 5-fold cross-validation with the splits provided by its authors, Koncel-Kedziorski et al. (2016). The SVAMP dataset is a challenge set: all 1,000 of its samples constitute the test set, while the model itself is trained on a combination of the MAWPS and ASDIV-A (Miao et al., 2021) datasets.
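As an illustrative sketch (not our actual code), such splits could be produced with generic scikit-learn utilities as follows; note that for MAWPS we actually rely on the folds released by its authors rather than the random folds shown here.

```python
from sklearn.model_selection import KFold, train_test_split

# Sketch: 80:10:10 split for PARAMAWPS and generic 5-fold splits for MAWPS.
def split_paramawps(samples, seed=42):
    train, rest = train_test_split(samples, test_size=0.2, random_state=seed)
    valid, test = train_test_split(rest, test_size=0.5, random_state=seed)
    return train, valid, test  # 80 / 10 / 10

def mawps_folds(samples, n_splits=5, seed=42):
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kfold.split(samples):
        yield [samples[i] for i in train_idx], [samples[i] for i in test_idx]
```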
## A.2 Performance Evaluation And Metric
We use negative log-likelihood loss (NLLLoss) for training all the models. For the baseline models, MWPToolkit uses two accuracy metrics, *Equation Accuracy* and *Value Accuracy*. Equation accuracy measures the correctness of the generated equation. Value accuracy measures the correctness of the value obtained by evaluating the generated equation; this metric accounts for the fact that models may generate equations that follow a different template than the respective ground-truth equations but nevertheless yield the correct answers to the problem statements.
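As a rough sketch, assuming predictions and references are given as Python-evaluable expression strings, the two metrics could be computed as follows; the helper names and the use of eval are illustrative simplifications, not MWPToolkit's implementation.

```python
import math

def equation_accuracy(pred_eqs, gold_eqs):
    # Exact match of the generated equation string (whitespace ignored).
    hits = sum(p.replace(" ", "") == g.replace(" ", "") for p, g in zip(pred_eqs, gold_eqs))
    return hits / len(gold_eqs)

def value_accuracy(pred_eqs, gold_eqs, rel_tol=1e-4):
    # A prediction counts as correct if its evaluated value matches the value
    # of the ground-truth equation, regardless of the template used.
    hits = 0
    for pred, gold in zip(pred_eqs, gold_eqs):
        try:
            if math.isclose(eval(pred), eval(gold), rel_tol=rel_tol):
                hits += 1
        except Exception:
            pass  # malformed or non-evaluable equations count as incorrect
    return hits / len(gold_eqs)

print(value_accuracy(["(8 + 4) / 2"], ["8 / 2 + 2"]))  # 1.0: different template, same value
```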
## A.3 Hyperparameters
In the DeBERTa model, we use embedding dimension d = 768, FFN size = 1024, number of decoder layers N = 4, number of attention heads h = 16, dropout ratio P_drop = 0.5, learning rate lr = 10⁻⁵, batch size b = 8, and 200 training epochs.
The hyperparameters for the other baseline models are as set on the respective MWPToolkit implementations.
## A.4 Optimizer
We use Adam (Kingma and Ba, 2014) with a StepLR learning rate scheduler as our optimizer.
The learning rate lr is set according to Vaswani et al. (2017):

$$lr = d^{-0.5} \cdot \min\left(n^{-0.5},\; n \cdot w^{-1.5}\right)$$

where d is the embedding dimension, n is the step number, and w is the number of warm-up steps. Warm-up here simply means that the learning rate rises linearly for the initial w training steps. We set β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸ and w = 1500 for the models' Adam optimizer. For the StepLR scheduler, we set γ = 0.5 and step_size = 5.
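A rough PyTorch sketch of this setup is shown below; the placeholder model and the wiring of the warm-up formula through LambdaLR are assumptions for illustration, since the text above does not specify exactly how the warm-up formula and the StepLR schedule are combined.

```python
import torch

d, w = 768, 1500  # embedding dimension and warm-up steps, as above

def noam_lr(step: int) -> float:
    # lr = d^{-0.5} * min(n^{-0.5}, n * w^{-1.5}), with n >= 1
    n = max(step, 1)
    return d ** -0.5 * min(n ** -0.5, n * w ** -1.5)

model = torch.nn.Linear(d, d)  # placeholder for the actual solver
optimizer = torch.optim.Adam(model.parameters(), lr=1.0,
                             betas=(0.9, 0.999), eps=1e-8)
# With base lr = 1.0, a LambdaLR multiplier reproduces the formula directly.
warmup = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lr)
# The StepLR scheduler mentioned above would be constructed as:
step_lr = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
```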
## A.5 Hardware And Schedule
We have used the NVIDIA RTX 3090 GPU
equipped with 25GB of VRAM and Intel Core i9 Processor for conducting our experiments. The DeBERTa model took around 18 hours to fully train on the PARAMAWPS dataset with 5-fold cross-validation and 200 epochs per fold, which was the highest expense of time among the lot.
The other baseline models took approximately 7 to 9 hours on the PARAMAWPS dataset and around 5 hours on MAWPS and SVAMP. The greater the number of parameters that a model possesses the more time it takes to fully complete the 5-fold training process. As DeBERTa has an astounding 134 million parameters (He et al., 2020), it takes the longest time to train.
![15_image_0.png](15_image_0.png)
| Variation Type | Original | Variation |
|---|---|---|
| Changed phrase order | There were originally 20817 houses in Lincoln County. During a housing boom, developers built 97741. How many houses are there now in Lincoln County? | How many houses are there in Lincoln County now, after developers built an additional 97741 during a housing boom, when there were originally 20817 houses? |
| Changed object and entity names | […] questions correct in the first half and 5 questions correct in the second half. If each question was worth 3 points, what was his final score? | While playing a game of Hangman, Emily guessed 3 letters correctly in the first half and 5 letters correctly in the second half. If each letter was worth 3 points, what was her final score? |
| Added unrelated information | A carpenter bought a piece of wood that was 8.9 centimeters long. Then he sawed 2.3 centimeters off the end. How long is the piece of wood now? | A carpenter bought a piece of wood that was 8.9 centimeters long. Then he sawed 2.3 centimeters off the end and sanded the wood for 20 minutes. How long is the piece of wood now? |
| Inverted question | Mary bought 3 pizzas for $8 each. What was the total amount she paid for the 3 pizzas? | If Mary paid $24 for 3 pizzas, how much did she pay for each pizza? |

Table 7: Types of variations with examples. The problems in the Original column are samples taken from the MAWPS dataset.
![16_image_0.png](16_image_0.png)
chirkova-etal-2023-marginalize | Should you marginalize over possible tokenizations? | https://aclanthology.org/2023.acl-short.1 | Autoregressive language models (LMs) map token sequences to probabilities. The usual practice for computing the probability of any character string (e.g. English sentences) is to first transform it into a sequence of tokens that is scored by the model. However, there are exponentially many token sequences that represent any given string. To truly compute the probability of a string one should marginalize over all tokenizations, which is typically intractable. Here, we analyze whether the practice of ignoring the marginalization is justified. To this end, we devise an importance-sampling-based algorithm that allows us to compute estimates of the marginal probabilities and compare them to the default procedure in a range of state-of-the-art models and datasets. Our results show that the gap in log-likelihood is no larger than 0.5{\%} in most cases, but that it becomes more pronounced for data with long complex words. | # Should You Marginalize Over Possible Tokenizations?
Nadezhda Chirkova1 Germán Kruszewski1 Jos Rozen1 **Marc Dymetman**2 1Naver Labs Europe 2Independent Researcher
{nadia.chirkova, german.kruszewski, jos.rozen}@naverlabs.com [email protected]
## Abstract
Autoregressive language models (LMs) map token sequences to probabilities. The usual practice for computing the probability of any character string (e.g. English sentences) is to first transform it into a sequence of tokens that is scored by the model. However, there are exponentially many token sequences that represent any given string. To truly compute the probability of a string one should *marginalize* over all tokenizations, which is typically intractable.
Here, we analyze whether the practice of ignoring the marginalization is justified. To this end, we devise an importance-sampling-based algorithm that allows us to compute estimates of the marginal probabilities and compare them to the default procedure in a range of state-ofthe-art models and datasets. Our results show that the gap in log-likelihood is no larger than 0.5% in most cases, but that it becomes more pronounced for data with long complex words.
## 1 Introduction
Language models are probability distributions over text strings. In practice, these distributions are defined over a vocabulary of *tokens*, such as words, punctuation marks, and other special symbols (Jurafsky, 2000; Goldberg, 2017). As long as a unique token sequence encodes any given string, the probability of a string according to the language model is equal to the probability of the corresponding token sequence. However, with today's popular sub-word-level tokenizations this is not the case, as there are (exponentially) many possible tokenizations for any given string. For example, with the vocabulary V = {*a, ab, b, c, ca, cab*}, the string *"cab"* can be tokenized into *cab, c/a/b, ca/b, c/ab*. Therefore, the *true* probability that the language model assigns to the corresponding string is that obtained after marginalizing over *all possible tokenizations*. Yet, the common practice disregards this fact, computing the string probability by scoring a single *default* tokenization (e.g., cab). The implicit assumption
![0_image_0.png](0_image_0.png)
Figure 1: Illustration of the proposed procedure for sampling tokenization T and calculating its proposal probability Q = Q(T|S) from a sequence of blocks B,
produced by splitting sequence S.
from the community is that the probability mass of non-default tokenizations is negligible. However, this assumption has not been adequately evaluated yet.
In part, Cao and Rimell (2021) addressed this very same question by conducting a pioneering study to quantify the gap between the default and marginalized probabilities. Their experiments with Transformer-XL pretrained on the WMT data (English and German) show negligible changes in perplexity with respect to using a single default tokenization for in-domain data and 0.9–1.9% improvement in perplexity for out-of-domain data, such as arXiv articles. Because exact marginalization is intractable in practice, marginalized probabilities were estimated using importance sampling.
Importance sampling computes an unbiased estimate of the marginalized probabilities as an average over tokenizations sampled from a proposal distribution. Cao and Rimell (2021) exploited the probabilistic nature of the UnigramLM tokenizer (Kudo, 2018) to define such a proposal. As a consequence, their results do not necessarily extend to the more popular language models like GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), BLOOM (Scao et al., 2022), and T5 (Raffel et al., 2020), which are trained with other tokenization schemes such as BPE (Sennrich et al., 2016) or WordPiece (Schuster and Nakajima, 2012).
In this work, we devise a new proposal distribution that allows us to quantify the effect of marginalization for any given tokenizer. Equipped with this algorithm, we inspect the effect of marginalization over tokenizations for two LMs, GPT-2
(126M parameters, English) and the recently released BLOOM (1.7B parameters, multilingual),
on various domains and languages. Our importance sampling estimates show that in practice marginalization does not influence log-likelihood much
(usually less than 0.5% improvement), the highest influence (1–2% improvement) being for data with long, complex words and distribution shift.
Because the results will vary for different models and data, we provide a tool for researchers and practitioners to measure the gap in their specific setting to decide whether the usual practice is warranted.
To this end, we release our code1, which can be applied to models from the transformers library.
## 2 Methodology 2.1 Preliminaries
Let us consider a sequence of characters S that we wish to score with an autoregressive language model P. Typically, S is split into a sequence T = t1*, . . . , t*n of tokens ti ∈ V , where V is the model's vocabulary, a process commonly known as *tokenizing* the sequence. Then we can compute a score for a tokenization T of the sequence S,
P(*T, S*), using the chain rule:
$$P(T,S)=1[T\to S]\prod_{j=1}^{|T|}P(t_{j}|t_{j-1},\ldots,t_{1})$$
where T → S indicates that T is a valid tokenization of S. Commonly used tokenization algorithms such as BPE or WordPiece provide a deterministic procedure for obtaining a *particular* way of tokenizing S into T, which we refer to as the *default* tokenization. Yet, in general, for the same sequence, there exist (exponentially) many possible tokenizations with vocabulary V , which also typically receive some probability mass by the LM. To obtain the *true* probability score for the sequence S,
we should marginalize over all valid tokenizations:
$P(S) = \sum_{T:T\to S} P(T,S)$.
Algorithm 1 Proposal algorithm
Input: sequence S; max. block size L; max. number of tokenizations per block M
Output: a tokenization T sampled with prob. Q(T|S)
1: T ← [ ]; q ← 1
2: B ← split_in_blocks(S, L)
3: for i = 1, . . . , |B| do
4:   X ← get_all_tokenizations(Bi, M)
5:   for j = 1, . . . , |X| do
6:     ŝj ← LM(Xj | T)
7:   for j = 1, . . . , |X| do
8:     sj ← ŝj / Σj′ ŝj′
9:   j∗ ← sample(s1, . . . , s|X|)
10:  T ← concat(T, Xj∗)
11:  q ← q · sj∗
12: Q(T|S) ← q
13: return T, Q(T|S)

However, computing P(S) is typically intractable given the exponential number of valid tokenizations. Nonetheless, this value can be estimated through importance sampling, as follows.
Introducing a proposal distribution Q(T|S) over all tokenizations T of a sequence S, such that P(*T, S*) > 0 ⇒ Q(T|S) > 0, we can rewrite the probability P(S), as follows:
$$P(S)=\sum_{T:T\to S}P(T,S)=\mathbb{E}_{Q(T|S)}{\frac{P(T,S)}{Q(T|S)}}\ \ (1)$$
Now we can estimate P(S) by sampling K independent tokenizations from the proposal:
$$P(S)\approx{\frac{1}{K}}\sum_{k=1}^{K}{\frac{P(T_{k},S)}{Q(T_{k}|S)}},\quad T_{k}\sim Q(T|S)\ \ (2)$$
The quality of this estimate depends on the chosen proposal distribution: the closer the proposal Q(T|S) is to the true posterior distribution P(T|S), the smaller the variance of the unbiased estimate (2) tends to be.²

²If we had access to the true posterior distribution P(T|S), we would have $\frac{P(T,S)}{P(T|S)} = P(S)$, and therefore (i) one sample would be enough to obtain the needed value P(S), and (ii) the variance of the importance sampling estimate would be zero.

## 2.2 Proposed Approach

We introduce a novel proposal Q(T|S) based on the LM itself, with the intention to make it naturally closer to the posterior. Importantly, this proposal can be used for any tokenizer, enabling its application to well-known state-of-the-art systems. The procedure for sampling from this proposal is presented in Algorithm 1 and also illustrated in Figure 1. In summary, the algorithm samples a tokenization T by building it incrementally as the concatenation of token subsequences Ti. Each token subsequence is sampled from the language model while always ensuring that the resulting tokenization is valid for the target S. To achieve this, the algorithm breaks S into a sequence of character blocks B, and only samples tokenizations Ti that are valid for the corresponding block Bi. Notably, in the extreme case of splitting S into a single block B1 = S,
our proposal Q(T|S) turns into the true posterior P(T|S), allowing to compute the exact marginalization with a single sample, as noted in footnote 2.
However, because sampling a valid tokenization of a block requires renormalizing over all such valid tokenizations, this extreme instantiation would defeat the purpose of the algorithm as it would be equivalent to computing the full marginalization.
Instead, we consider block sizes over which we can practically compute the renormalization constant by, for example, using whitespace-separated words as blocks. Still, because this can sometimes lead to impractically-sized blocks with a number of tokenizations that can exceed what we can reasonably score with an LM, we limit the maximum block size to a parameter L and we only score the top M block tokenizations, inversely sorted by their number of tokens³. The resulting algorithm requires O(|B| × M) evaluations of the LM per sample, where |B| is the number of blocks used to split the sequence S. In Appendix E, we validate that, for short sentences with a tractable number of possible tokenizations, for which we can actually compute the true value of the marginalization, our algorithm provides quite precise estimates.
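To make the procedure concrete, the following is a minimal, self-contained Python sketch of the proposal from Algorithm 1 and the importance-sampling estimate (2), computed in log space for numerical stability. A uniform toy LM stands in for a real autoregressive model, and the maximum block size L and the top-M limit are omitted for brevity; the function names are illustrative and do not correspond to our released code.

```python
import math
import random
from typing import List, Sequence, Tuple

def uniform_logp(prefix: Sequence[str], token: str, vocab: Sequence[str]) -> float:
    # Toy stand-in for log P(token | prefix): uniform over the vocabulary.
    return -math.log(len(vocab))

def enumerate_tokenizations(block: str, vocab: Sequence[str]) -> List[Tuple[str, ...]]:
    # All ways of segmenting `block` into vocabulary tokens.
    if block == "":
        return [()]
    results = []
    for tok in vocab:
        if block.startswith(tok):
            for rest in enumerate_tokenizations(block[len(tok):], vocab):
                results.append((tok,) + rest)
    return results

def sample_proposal(blocks, vocab, logp):
    # One draw from Q(T|S): returns (tokens, log Q(T|S), log P(T,S)).
    tokens, log_q, log_p = [], 0.0, 0.0
    for block in blocks:
        cands = enumerate_tokenizations(block, vocab)
        scores = []  # log P of each candidate continuation, given tokens so far
        for cand in cands:
            s, prefix = 0.0, list(tokens)
            for tok in cand:
                s += logp(prefix, tok, vocab)
                prefix.append(tok)
            scores.append(s)
        z = max(scores)  # renormalise over the block's candidates
        weights = [math.exp(s - z) for s in scores]
        total = sum(weights)
        probs = [wgt / total for wgt in weights]
        j = random.choices(range(len(cands)), weights=probs)[0]
        tokens.extend(cands[j])
        log_q += math.log(probs[j])
        log_p += scores[j]
    return tokens, log_q, log_p

def importance_estimate_logp(blocks, vocab, logp, k=30):
    # log P(S) ≈ log( (1/K) Σ_k exp(log P(T_k,S) − log Q(T_k|S)) )
    ratios = []
    for _ in range(k):
        _, log_q, log_p = sample_proposal(blocks, vocab, logp)
        ratios.append(log_p - log_q)
    m = max(ratios)
    return m + math.log(sum(math.exp(r - m) for r in ratios) / k)

# The vocabulary from the introduction: V = {a, ab, b, c, ca, cab}
vocab = ["a", "ab", "b", "c", "ca", "cab"]
print(importance_estimate_logp(["cab"], vocab, uniform_logp, k=10))
```

Since the example passes the string "cab" as a single block, the proposal coincides with the posterior and a single sample already recovers P(S) exactly, as discussed above.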
## 3 Experiments
Experimental setup. We experiment with two language models, GPT-2 (Radford et al. 2019, 126M parameters, English) and the recently released BLOOM (Scao et al. 2022, 1.7B parameters, 45 natural and 12 programming languages).
We select the following datasets for evaluating the LMs, which cover different styles and languages: Wikipedia articles (En), Twitter posts
(En), CNN news (En), Transcriptions of White House Speeches (En), Flores-200 (sentences from Wikipedia in various languages, including high-resource, low-resource, Latin and non-Latin scripts),
Python and C++ code (one recently released repository for each language). We concatenate texts into sequences of length 800 tokens (as measured by the default tokenization) to provide longer context
| Exp. | BPCdf | BPCis | BPC gap | % rel. gap | % ND |
|---------------------|---------|---------|---------|------------|------|
| GPT-2 (125M params) | | | | | |
| Wiki | 1.1076 | 1.1026 | .0050 | 0.45% | 0.9% |
| Twit | 1.9610 | 1.9303 | .0307 | 1.56% | 4.2% |
| News | 0.9421 | 0.939 | .0028 | 0.30% | 0.4% |
| Tr.sp. | 1.0234 | 1.0029 | .0204 | 1.99% | 1.5% |
| BLOOM (1.7B params) | | | | | |
| Twit | 1.7889 | 1.7653 | .0236 | 1.32% | 3.3% |
| News | 0.8499 | 0.8462 | .0037 | 0.55% | 0.4% |
| Tr.sp. | 0.9022 | 0.9002 | .0020 | 0.23% | 0.4% |
| Chi† | 1.2080 | 1.2024 | .0056 | 0.46% | 3.1% |
| Fra | 0.8001 | 0.7993 | .0008 | 0.10% | 0.2% |
| Spa | 0.8813 | 0.8800 | .0013 | 0.14% | 0.3% |
| Vie | 0.7939 | 0.7932 | .0008 | 0.10% | 0.1% |
| Ind | 0.9812 | 0.9778 | .0034 | 0.34% | 0.6% |
| Eus | 1.2432 | 1.2269 | .0163 | 1.31% | 3.5% |
| Urd† | 0.8785 | 0.8697 | .0088 | 1.00% | 1.8% |
| Python | 0.5100 | 0.5071 | .0029 | 0.56% | 1.3% |
| C++ | 0.6053 | 0.5993 | .0059 | 0.98% | 2.2% |
for the LM. We evaluate on 100 sequences per dataset (Flores-200, CNN news and Code datasets are shorter). We refer to Appendix A for more details on the data and how we check that the LMs were not trained on the evaluation data.
We measure the cross entropy (in BPC⁴) between the data and the model according to the default tokenization (BPCdf) and between the data and the marginalized model according to the importance sampling estimate (BPCis), as well as their difference BPCdf − BPCis, referred to as the BPC gap, and the normalized difference (BPCdf − BPCis)/BPCdf (relative BPC gap). Furthermore, we compute a 90% confidence interval $[\mathrm{BPC}^{L}_{is}, \mathrm{BPC}^{R}_{is}]$ around BPCis, using bootstrap resampling (Wasserman, 2004, Chapter 8) for n = 1000 trials⁵. Additionally, we report the proportion of blocks for which our algorithm samples non-default tokenizations (%ND).
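For reference, a minimal percentile-bootstrap sketch is shown below; the choice of resampling unit (here, hypothetical per-sample ratios) and the percentile construction are illustrative assumptions.

```python
import random

def bootstrap_ci(values, estimator, n_trials=1000, alpha=0.10):
    # Percentile bootstrap: resample `values` with replacement, re-estimate,
    # and take the alpha/2 and 1 - alpha/2 quantiles of the estimates.
    stats = sorted(
        estimator([random.choice(values) for _ in values])
        for _ in range(n_trials)
    )
    lo = stats[int(alpha / 2 * n_trials)]
    hi = stats[int((1 - alpha / 2) * n_trials) - 1]
    return lo, hi

# e.g. a 90% interval around the mean of hypothetical per-sample ratios
ratios = [0.52, 0.48, 0.55, 0.50, 0.47]
print(bootstrap_ci(ratios, lambda xs: sum(xs) / len(xs)))
```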
As for hyperparameters, we use M = 128 and choose L to be the maximum token length in the default tokenization of the evaluation data. We provide empirical validation for both these hyper-
![3_image_0.png](3_image_0.png)
parameters in Appendices D and C, respectively.
We sample K = 30 tokenizations per sequence.
Results Table 1 presents our main results. We generally observe a low relative BPC gap (< 0.5%), but in some cases exceeding 1%, e.g. 1.3–
1.5% on Twitter, 2% on transcribed speech data, 1.3% on the Basque language (Eus) or 1% on the Urdu language (Urd). We note that dataset/model pairs with higher relative gap tend to be connected with low-resource languages (Basque and Urdu), non-latin scripts (Urdu and Chinese), and data distribution shift (transcribed speech, Twitter). Moreover, we observe a higher gap to be associated with a higher percentage of non-default tokenizations sampled by our algorithm (%ND).
To learn more about the factors driving the probability of sampling the default tokenization, we bin blocks (which roughly correspond to words) from Wikipedia by the probability that our proposal assigns to their default tokenization, Q(df.), when using GPT-2 as a model. Table 2 shows a few examples of blocks from each bin alongside the bin's frequency. As can be seen, high probability of sampling the default tokenization usually corresponds to common and simple words, whereas low probability corresponds to complex and rare words.
From this observation, we conjecture that higher gaps are at least in part driven by the presence of long complex words in the datasets.
| Q(df.) | Freq. | Example blocks |
|---------|-------|----------------|
| >0.999 | 90% | Many, are, the, larger, amphibians, superficially, resemble |
| | 6.1% | crocodiles, whenever, bases, Rifenburg, sailed, precursors |
| | 2.2% | warships, propelled, Tomasz, redemption, Metoposaurus |
| | 0.7% | paedomorphic, Peltobatrachus, ironclad, Urabi, Tonnante |
| 0–0.5 | 0.7% | temnospondyls, brevirostrine, Pugong, saurus, semiaquatic |

Table 2: Example blocks from Wikipedia, binned by the probability Q(df.) that the proposal assigns to their default tokenization (GPT-2), with the frequency of each bin.

Finally, Figure 2 visualizes confidence intervals on BPC gaps for *individual* sequences across several datasets. Additional results are given in Appendix F. In particular, we plot the left limit of the confidence interval for the BPC gap ($\mathrm{BPC}^{L}_{is} - \mathrm{BPC}_{df}$) on the x-axis and the width of the interval ($\mathrm{BPC}^{R}_{is} - \mathrm{BPC}^{L}_{is}$) on the y-axis (non-negative by definition). If a dot is located to the right of 0, it means that we are highly confident that the BPC gap is positive on that individual sequence. The farther the dot is on the x-axis, the higher the corresponding BPC gap is. Likewise, the lower the value on the y-axis, the lower is the variance of our estimate of the marginalized probability and, consequently, of the BPC gap. As can be seen, we obtain low-variance predictions for most of the sequences, and for almost all of them we can observe a positive BPC gap. Moreover, we can note a distributional difference between dataset/model pairs with a low BPC gap (such as those on the right-hand side of Figure 2, with points concentrated close to the 0 value) and those with a high BPC gap (such as those represented on the left-hand side of Figure 2, with points spread up to the right).
## 4 Related Work
Stochastic tokenization and marginalization over tokenizations have been widely investigated in the context of model *training* (Grave et al., 2019; van Merriënboer et al., 2017; Buckman and Neubig, 2018; Provilkov et al., 2020; Kudo, 2018) or of learning better tokenizers (He et al., 2020); in contrast, we evaluate the effect of marginalization at the *inference* stage, when the tokenizer and the LM were trained in the default, commonly-used way. The closest study to ours is Cao and Rimell (2021), which relies on the stochastic version of the UnigramLM tokenizer as their proposal Q, and thus their approach is inapplicable to LMs with other tokenizers. They also had to introduce a set of heuristics, such as imposing consistent tokenization of repeated words or enforcing the default tokenization to be included among the sampled tokenizations, to make this proposal closer to the posterior and to decrease the variance of importance sampling.
## 5 Conclusion
In this work, we have studied the effect of marginalization over possible tokenizations in language modeling. For this, we introduced a novel proposal distribution over tokenizations, which is used in the importance sampling algorithm to obtain estimates of the marginalized probability, and that can be applied to any tokenizer and language model. Our results show that the overall effect of marginalization over tokenizations is often smaller than 0.5%,
although it becomes more pronounced for data with long complex words or distribution shift. We release our code to allow practitioners to check the effect of marginalization for their models of interest.
## Limitations
The main limitation of the proposed approach is that it would be relatively costly to apply at production time, compared to the conventional LM evaluation. First, it requires drawing a number of tokenization samples, as defined by importance sampling, in contrast to a single pass through the evaluated sequence in the conventional approach. Second, the conventional approach can be conducted with teacher forcing and efficiently parallelized, while the proposed approach relies on block-byblock sequential processing. Nonetheless, the proposed algorithm is designed for analysis purposes rather than to be used in production systems, for which it is feasible to run it in a reasonable time, allowing users to evaluate the effect of marginalization for any tokenizer and language.
## Broader Impact
As the work is dedicated to evaluating existing models on publicly available datasets, we are not aware of any potential negative impact.
## Acknowledgements
We would like to thank Matthias Gallé for his valuable feedback.
## References
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Jacob Buckman and Graham Neubig. 2018. Neural lattice language models. *Transactions of the Association for Computational Linguistics*, 6:529–541.
Kris Cao and Laura Rimell. 2021. You should evaluate your language model on marginal likelihood over tokenisations. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2104–2114, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. *arXiv preprint* arXiv:2207.04672.
Yoav Goldberg. 2017. *Neural network methods for* natural language processing, volume 10. Morgan &
Claypool Publishers.
Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, and Armand Joulin. 2019. Training hybrid language models by marginalizing over segmentations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1477–1482, Florence, Italy. Association for Computational Linguistics.
Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2020. Dynamic programming encoding for subword segmentation in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3042–3051, Online. Association for Computational Linguistics.
Dan Jurafsky. 2000. *Speech & language processing*.
Pearson Education India.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. *arXiv preprint arXiv:1804.10959*.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1–17.
Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita.
2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882–1892, Online. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In *2012 IEEE international* conference on acoustics, speech and signal processing (ICASSP), pages 5149–5152. IEEE.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725.
Bart van Merriënboer, Amartya Sanyal, Hugo Larochelle, and Yoshua Bengio. 2017. Multiscale sequence modeling with a learned dictionary.
Larry Wasserman. 2004. *All of statistics: a concise* course in statistical inference, volume 26. Springer.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
## A Data
We consider the following datasets:
- Wikitext (https://huggingface.co/datasets/wikitext, wikitext-2-raw-v1 test subset, Merity et al., 2016, CC BY-SA 4.0 license);
- Twitter posts (https://huggingface.co/datasets/tweet_eval, emoji test subset, Mohammad et al., 2018);
- CNN News (https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning, CC0 license);
- The White House Speeches (https://www.kaggle.com/datasets/mohamedkhaledelsafty/the-white-house-speeches-and-remarks-12102022, CC0 license);
- Flores-200 (https://github.com/facebookresearch/flores/tree/main/flores200, Costa-jussà et al., 2022, CC BY-SA 4.0 license);
- Python Code (all .py files from https://github.com/naver/disco, Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license);
- C++ Code (all .h and .cc files from https://github.com/microsoft/Trieste, MIT license).
Wikitext and White House Speeches datasets consist of paragraphs extracted from Wikipedia articles (wikipedia.org) or from transcribed speeches. Flores-200 is composed of sentences extracted from English Wikipedia and translated by professional translators into 200 languages. Python and C++ Code data consists of code files. Twitter /
News datasets consist of separate tweets / news articles. We compose sequences to evaluate an LM on, by concatenating texts listed above into sequences of 800 tokens according to the default tokenization
(concatenated texts are separated by \n\n). The sequence always begins with a new text. Code and News data contains texts longer than 800 tokens, these texts are considered as separate sequences and clipped to 800 tokens. Table 3 reports statistics of the data. Maximum 100 sequences per dataset
Dataset **Av. / max**
length
Total # of sequences
Wikitext 98 / 556 100 Twitter 20 / 159 100 News 833 / 2940 63 Tr. sp. 33 / 158 100
Flores (En) 27 / 69 37 Python 320 / 2623 6
C++ 2296 / 16324 12
are considered (Flores-200 dataset, News dataset and code data are shorter).
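For illustration, the following is a minimal sketch of such packing with a Hugging Face tokenizer; the greedy packing loop and the clipping details are assumptions and may differ from our actual preprocessing.

```python
from transformers import AutoTokenizer

def pack_texts(texts, tokenizer, max_tokens=800, sep="\n\n"):
    # Greedily concatenate texts into sequences of at most `max_tokens` tokens
    # under the default tokenization; over-long texts become clipped sequences.
    sequences, current = [], ""
    for text in texts:
        candidate = (current + sep + text) if current else text
        if len(tokenizer(candidate)["input_ids"]) <= max_tokens:
            current = candidate
            continue
        if current:
            sequences.append(current)    # close the current sequence
        ids = tokenizer(text)["input_ids"]
        if len(ids) > max_tokens:        # e.g. long news articles or code files
            sequences.append(tokenizer.decode(ids[:max_tokens]))
            current = ""
        else:
            current = text               # a sequence always begins with a new text
    if current:
        sequences.append(current)
    return sequences

tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(len(pack_texts(["first text", "second text"], tokenizer)))
```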
We checked that the data we evaluate on was not used in model training as follows. GPT-2 was not trained on Wikipedia data, as reported in its paper (Radford et al., 2019). BLOOM was trained on Wikipedia data, so we do not evaluate it on Wikipedia and English Flores data. At the same time, data for other languages is based on translations, which makes it safe to use it for evaluation. Twitter is not listed in data sources for GPT-2 (https://github.com/openai/
gpt-2/blob/master/domains.txt) and BLOOM (https://huggingface.co/spaces/bigscience/BigScienceCorpus).
For evaluation on code, we use the code of the libraries created after the BLOOM's training. Likewise, for evaluation on the news and White House speech data, we selected only texts released after 11.03.2022 (after the beginning of the largest BLOOM model's training).
## B **Additional Information On Experiments**
The BLOOM model is released under the Responsible AI License, and GPT-2 is released under the Modified MIT License. Our code is based on the transformers library (Wolf et al., 2020) which is released under the Apache License 2.0 license.
All assets allow usage for research purposes. Evaluation of the GPT-2 model was conducted on a single Tesla V-100 GPU (24–48 GPU hours per dataset), and evaluation of the BLOOM model conducted on a single Tesla A100 GPU (72–120 GPU
hours per dataset).
![7_image_0.png](7_image_0.png)
| L | %T2 | %T1 | BPC gap | % BPCis < BPCdf |
|-----|-------|-------|-----------|-------------------|
| 17 | 0.08 | 0.5 | -0.00108 | 77 |
| 19 | 0 | 0.33 | 0.00084 | 100 |
| 21 | 0 | 0.08 | 0.00079 | 98 |
| 19 | 0.3 | 1.3 | 0.0019 | 62 |
| 21 | 0 | 0.3 | 0.0087 | 100 |
| 23 | 0 | 0.2 | 0.0089 | 100 |
![7_image_2.png](7_image_2.png)
## C Segmentation Into Blocks
As discussed in Section 2.2, the proposal algorithm splits the sequence into a sequence of blocks. In our experiments, we split the sequence at white spaces and new line characters, thus making blocks roughly correspond to words. Because our algorithm computes all possible tokenizations within a block, this process can become prohibitively expensive for long blocks, which can occur with complex words or in languages that do not frequently use the white space character, such as Chinese. For this reason, we define a *maximum block length* hyperparameter, L. Words that have length lower or equal to L are denoted as type 0 (T0) blocks. If a word has length larger than L, it is split into smaller blocks, as follows. First, we compute the block's default tokenization and incrementally merge the tokens while checking not to exceed L. Once the limit is reached, a new block is started. The resulting blocks are denoted as type 1 (T1) blocks.
Suppose at any point a token of length larger than L is encountered. In that case, this token is cropped at L, and the remaining characters are then moved to a new block. These blocks are denoted as type 2
(T2) blocks. Figure 3 illustrates these three block types.
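A minimal sketch of this splitting procedure is given below; `default_tokenize` is an assumed callable that returns token strings concatenating back to the word, and the handling of the T2 remainder is a simplification that may differ from our implementation in detail.

```python
def split_word_into_blocks(word, default_tokenize, L):
    # Words of length <= L stay as a single (type 0) block; longer words are
    # split by merging default tokens up to L characters (type 1), cropping
    # any token longer than L and carrying the remainder over (type 2).
    if len(word) <= L:
        return [word]
    blocks, current = [], ""
    for tok in default_tokenize(word):
        while len(tok) > L:              # type 2: crop an over-long token
            if current:
                blocks.append(current)
                current = ""
            blocks.append(tok[:L])
            tok = tok[L:]
        if len(current) + len(tok) > L:  # type 1: the limit is reached
            blocks.append(current)
            current = tok
        else:
            current += tok
    if current:
        blocks.append(current)
    return blocks

# Example with a trivial "tokenizer" that cuts a word into 4-character pieces
print(split_word_into_blocks("paedomorphic",
                             lambda w: [w[i:i + 4] for i in range(0, len(w), 4)],
                             L=6))
```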
Table 4 illustrates the effect that the maximum block length hyperparameter L has for BLOOM
on French (low-gap case) and Urdu (higher-gap
![7_image_1.png](7_image_1.png)
case). We experiment with three values of L to represent various proportions of T1 and T2 blocks. For low values of L (L = 17 and L = 19 for French and Urdu, respectively), we observe some small or even negative gap in BPC, and a large percentage of sequences that have higher cross-entropy when using the marginal than when using the default tokenization. This result comes with a small but non-negligible percentage of T2 blocks. Because T2 splits a token that is selected by the default tokenization across different blocks, this prevents the proposal from ever sampling the default tokenization, resulting in a poor estimate. Higher values of L result in the elimination of any T2 blocks with also a moderate impact on T1 blocks. Yet, once T2 blocks are eliminated, the number of T1 blocks does not appear to have a sizeable effect. Overall, these results provide the rule for selecting L: it should be set to the maximum length of the tokens in the default tokenization of the evaluation data in order to avoid T2 blocks.
## D Limiting The Number Of Tokenizations Per Block
The proposed importance sampling algorithm limits M, the number of tokenizations per block that are scored with the LM, for better efficiency. In this section we show why this does not harm the results. In the top plot of Figure 4 we show that the proposal probability of a block's tokenization
![8_image_0.png](8_image_0.png)
strongly correlates with the number of subtokens in the tokenization. This motivates selecting the top-M tokenizations per block by sorting the block's tokenizations by their number of subtokens (we use M = 128 in our experiments). Now, in the bottom plot of Figure 4 we present the 2D histogram of proposal probabilities of blocks' tokenizations and their ranks in this sorting. It can be seen that the proposal probabilities of tokenizations with ranks higher than 10 are very low, i.e. usually lower than 10⁻¹⁰. In fact, tokenizations with ranks greater than 70 were never sampled (in 99.95% of cases one of the first 10 tokenizations was sampled, in 0.05% of cases one of the tokenizations with ranks 11–40, and in 0.0004% of cases one with ranks 40–69).
## E **Algorithm Validation On Short Sentences**
To validate the proposed algorithm, we compare the marginal BPC estimated with our algorithm to the true marginal BPC, BPCm, obtained by enumerating all tokenizations of several relatively short sentences (⩽ 25 characters, < 1M tokenizations).
From Table 5 we observe that for sentences with relatively high BPC gap (N1–2), our estimate BPCis is close to BPCm, with a thin confidence interval
![8_image_1.png](8_image_1.png)
which includes BPCm. N3 shows the case with a lower BPC gap, for which our estimate BPCis is between BPCdf and BPCm, and the confidence interval is wider but still includes BPCm.
(BPCm) is 3–100 times smaller than between default (BPCdf) and marginal (BPCm). Finally, N7 shows the case with low BPC gap, in which our proposal did sample some non-default tokenizations, and the resulting estimate BPCis was larger than BPCdf. However, this ordering almost never happens with long texts, which is the intended use-case of our algorithm. To summarize, in almost all cases our algorithm produces a quite precise estimate.
## F Additional Confidence Interval Plots
Figure 5 shows additional confidence interval plots.
The conclusions are the same as for plots in the main text.
| Block's frequency | >=1e-4 | <1e-4 |
|-------------------------------------|----------|---------|
| % such blocks | 0.602 | 0.398 |
| % sampled default tokenizations | 0.978 | 0.925 |
| % sampled non-default tokenizations | 0.022 | 0.075 |
| % sampled length-1 tokenizations∗ | 0.829 | 0.306 |
| % sampled length-2 tokenizations∗ | 0.137 | 0.299 |
| % sampled length>=3 tokenizations∗ | 0.034 | 0.395 |

Table 6: Distribution of the number of tokens in sampled block tokenizations, for high-frequency and low-frequency blocks (GPT-2 on Twitter data).
## G Additional Analysis
The intuition for why the impact of non-default tokenizations becomes more pronounced for complex words, low-resource languages, and data distribution shift is that all these cases are characterized by the appearance of blocks that were rarely or never seen during training. Roughly speaking, frequent words are encoded with short token sequences (1–2 tokens) by design of the tokenizer.
Furthermore, the language model assigns high probability to the default tokenizations of these words because it saw them frequently during training. As a result, the effect of marginalization is small. In contrast, rare words are encoded with longer token sequences, and because they are not frequently seen during training, the language model can assign high probabilities to tokenizations other than the default one.
To illustrate this reasoning, Table 6 reports the distribution of the number of tokens in sampled block tokenizations, for low-frequency and high-frequency blocks, for GPT-2 on Twitter data. Low-frequency blocks are split into more tokens and have a higher portion of non-default tokenizations.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Broader impact
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Appendices A, B
✓ B1. Did you cite the creators of artifacts you used?
Section 3 and Appendices A, B
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendices A, B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix B
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Our research is devoted to evaluating log-likelihood of existing models, we do not release any new models or textual artefacts. That is why we do not anticipate any harms from our work.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3 and Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** Section 3 And Appendices
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3 and Appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 and Appendices C, D
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yoshinaga-2023-back | Back to Patterns: Efficient Japanese Morphological Analysis with Feature-Sequence Trie | https://aclanthology.org/2023.acl-short.2 | Accurate neural models are much less efficient than non-neural models and are useless for processing billions of social media posts or handling user queries in real time with a limited budget. This study revisits the fastest pattern-based NLP methods to make them as accurate as possible, thus yielding a strikingly simple yet surprisingly accurate morphological analyzer for Japanese. The proposed method induces reliable patterns from a morphological dictionary and annotated data. Experimental results on two standard datasets confirm that the method exhibits comparable accuracy to learning-based baselines, while boasting a remarkable throughput of over 1,000,000 sentences per second on a single modern CPU. The source code is available at https://www.tkl.iis.u-tokyo.ac.jp/ynaga/jagger/ |
# Back To Patterns: Efficient Japanese Morphological Analysis With Feature-Sequence Trie
Naoki Yoshinaga Institute of Industrial Science, The University of Tokyo [email protected]
## Abstract
Accurate neural models are much less efficient than non-neural models and are useless for processing billions of social media posts or handling user queries in real time with a limited budget. This study revisits the fastest patternbased NLP methods to make them as accurate as possible, thus yielding a strikingly simple yet surprisingly accurate morphological analyzer for Japanese. The proposed method induces reliable patterns from a morphological dictionary and annotated data. Experimental results on two standard datasets confirm that the method exhibits comparable accuracy to learning-based baselines, while boasting a remarkable throughput of **over 1,000,000 sentences per second**
on a single modern CPU. The source code is available at https://www.tkl.iis.u-tokyo.ac.jp/~ynaga/jagger/.
## 1 Introduction
The amount of text data being processed has greatly increased since the advent of communication platforms such as Twitter, Zoom, and Slack, and NLP
services such as DeepL and Grammarly have millions of users. Some users analyze textual big data for marketing, linguistics, or sociology, while others deploy NLP services on their own devices because of privacy concerns. It is therefore becoming important to develop highly efficient methods to process massive text data and user queries with limited computational resources.
However, the recent campaign for efficient NLP
does not focus on literally efficient methods that scale to increasing data sizes and run on resource-constrained devices. Instead, most "efficient" NLP
studies (Treviso et al., 2022) focus on neural methods, which are too slow to handle billions of social media posts and too large to deploy on edge devices.
Those studies seek to make model training or inference *relatively* efficient within the deep learning framework. Thus, the large efficiency gap with respect to classical methods has never been filled.
In this study, I take an orthogonal approach toward *absolutely* efficient NLP by seeking to boost the accuracy of the fastest methods. Specifically, I have developed a remarkably simple yet accurate method for Japanese morphological analysis, which is a joint task of word segmentation, part-of-speech (POS) tagging, and lemmatization.
This method revisits the classical longest matching method; it greedily applies patterns that determine the next position to segment and then identifies the POS tag for the segmented word, as illustrated in Figure 1. To obtain reliable patterns, starting from words in a morphological dictionary and training data, patterns are extended with posterior surface contexts and previous POS tags, and the patterns' segmentation offsets and tags are determined by frequency. The extracted patterns are then stored in an efficient double-array trie (Aoe, 1989).
The proposed method was evaluated on two standard corpora (Kurohashi and Nagao, 2003; Hangyo et al., 2012). The experimental results confirmed that this simple method can process 1,000,000 sentences per second on an M2 MacBook Air, with comparable accuracy to learning-based baselines (Kudo et al., 2004; Neubig et al., 2011).
Algorithm 1 Pattern-based morphological analysis
INPUT: sequence of characters, c; set of patterns stored in a trie, P = {(p, shift, t)}
OUTPUT: sequence of words with tags, s = {(wj, tj)}
1: i ← 0
2: **while** i < len(c) do
3:     (shiftˆ, tˆ) = longest_prefix_search(c≥i, P)
4:     append(s, (c_i^{i+shiftˆ}, tˆ))
5:     i ← i + shiftˆ
6: **return** s
## 2 Pattern-Based Morphological Analysis
This section describes the method of Japanese morphological analysis used here, which performs word segmentation, POS tagging, and lemmatization. To maximize the tagging efficiency, I return to a pattern-based algorithm that is similar to the longest matching algorithm (Nagata, 1994).
The longest matching algorithm performs deterministic word segmentation by using a dictionary.
Starting from the beginning of the input, it greedily finds the longest dictionary words to segment the input. Although this simple algorithm exhibits moderate accuracy in Chinese and Japanese with transformation rules (Palmer, 1997; Hockenmaier and Brew, 1998; Sassano, 2014), there is a gap in accuracy from search- and classification-based approaches (Kudo et al., 2004; Neubig et al., 2011).
To make search-based morphological analysis partially deterministic, Morita and Iwakura (2019) extracted surface patterns from tagging results; however, the speed-up factor was at most 1.5.
## 2.1 Basic Algorithm
Algorithm 1 is a simple, deterministic algorithm for joint word segmentation, POS tagging, and lemmatization. It repeatedly applies the longest-matching patterns in a trie P to a given sequence of characters, c, and a start position i to segment and tag the next word (wj = c_i^{i+shiftˆ} and tˆj). As will be shown later in § 3, this simple algorithm *works* as well as learning-based approaches.
This algorithm is inspired by the longest matching algorithm but differs in that the segmentation offset shift can be smaller than the surface length matched with patterns, k (see Line 7 in Algorithm 2). A running example is shown in Figure 1.
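A minimal Python sketch of Algorithm 1, with a plain dictionary and a shrinking-window search standing in for the double-array trie; the single-character fall-back and the way the previous tag is appended to the pattern key are simplifying assumptions (the actual unknown-word handling is described in Appendix A):

```python
from typing import Dict, List, Tuple

def analyze(chars: str,
            patterns: Dict[str, Tuple[int, str]],
            max_pattern_len: int) -> List[Tuple[str, str]]:
    """Greedy pattern-based segmentation and tagging (a sketch of Algorithm 1).
    `patterns` maps a surface string, optionally suffixed with ';'+previous tag,
    to a (shift, tag) pair; shift may be shorter than the matched surface."""
    output: List[Tuple[str, str]] = []
    i, prev_tag = 0, "BOS"
    while i < len(chars):
        shift, tag = 1, "UNK"  # fall back to emitting a single character
        for k in range(min(max_pattern_len, len(chars) - i), 0, -1):
            surface = chars[i:i + k]
            if surface + ";" + prev_tag in patterns:       # tag-conditioned pattern
                shift, tag = patterns[surface + ";" + prev_tag]
                break
            if surface in patterns:                        # surface-only pattern
                shift, tag = patterns[surface]
                break
        output.append((chars[i:i + shift], tag))
        i += shift
        prev_tag = tag
    return output
```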
The algorithm is also inspired by the precomputation of feature weights in sequence labeling (Kaji et al., 2010) and classification with conjunctive features (Yoshinaga and Kitsuregawa, 2009, 2010, 2014). Those methods accumulate certain feature
Algorithm 2 Pattern extraction from training data
INPUT: training data D and dictionary V
OUTPUT: set of patterns, P = {(p, shift, t)}
1: Pˆ ← ϕ
2: Lmax = max_{(w,t)∈V} len(w)
3: **for all** training examples (c, s = {(wl, tl)}_{l=1}^{L}) ∈ D do
4:     i ← 0
5:     for j = 0 to L do
6:         shift = len(wj)
7:         for k = shift to Lmax do
8:             Pˆ[c_i^{i+k}][(shift, tj)] += 1
9:             Pˆ[c_i^{i+k};tj−1][(shift, tj)] += 1
10:        i ← i + shift
11: P ← {(w, len(w), tˆ)} where (w, ∗) ∈ V, w ∉ Pˆ,
12:     tˆ = argmax_{t|(w,t)∈V} Σ_{w′} Pˆ[w′][(len(w′), t)]
13: **for all** pattern candidates p ∈ Pˆ from the shortest one do
14:     shift = argmax_shift Σ_t Pˆ[p][(shift, t)]
15:     t = argmax_t Pˆ[p][(shift, t)]
16:     (shift′, t′) = longest_prefix_search(p, P)
17:     if (shift, t) ≠ (shift′, t′) **then**
18:         P ← P ∪ {(p, shift, t)}
19: **return** P
weights in advance and retrieve those partial results by using simple keys such as word unigrams, POS
bigrams, and primitive feature sequences to compute the final results (labels) by an argmax operation on the weights. The proposed method regards word segmentation and tagging as a joint, multiclass classification problem and directly obtains the label (i.e., where to segment and what to tag) by using the feature sequence as a pattern, thus skipping the expensive argmax operation over a number of labels. The longest matching thus implies classification with as many features as possible.
## 2.2 Pattern Extraction From Data
Following the feature templates of learning-based methods (Kudo et al., 2004; Neubig et al., 2011),
the algorithm's pattern template was designed as a sequence of characters, c, followed by the previous word's POS tag tj−1, thus giving c;tj−1, where ';'
represents string concatenation.
Algorithm 2 is the procedure to extract patterns for word segmentation and POS tagging from the annotated data and a dictionary. Given training data D with annotation of (word) segmentations and (POS) tags and a dictionary V compiling words and their possible tags, the algorithm iteratively extracts possible patterns from D. It first enumerates surface patterns c_i^{i+k} from all starting positions of words in D, and it then concatenates them with the tag tj−1 of the preceding word to form pattern candidates (Lines 3–10 in Algorithm 2).
|              | KYOTO  |       |       | KWDLC  |       |       |
|--------------|--------|-------|-------|--------|-------|-------|
|              | train  | dev   | test  | train  | dev   | test  |
| # sentences  | 35,478 | 1,145 | 1,783 | 12,271 | 1,585 | 2,195 |
| ave. # words | 25.37  | 26.24 | 25.83 | 15.85  | 14.27 | 16.34 |
Table 1: Statistics of the evaluation datasets.
Patterns are added for dictionary words that are unseen in the training data (Lines 11–12). The segmentation offset (shift) and tag t for a pattern are determined by frequency (Lines 14–15). To avoid extra matching against the posterior context and the previous tag, we only keep patterns whose segmentation offsets and tags differ from those of the longest *prefix* patterns that share prefixes of posterior contexts (Lines 16–18). This not only reduces the number and length of patterns but also minimizes the longest matching method's overhead for word segmentation.1
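The counting part of Algorithm 2 (Lines 3–10 and 13–15) can be sketched as follows; dictionary seeding (Lines 11–12) and the longest-prefix pruning (Lines 16–18) are omitted, and the BOS tag for the first word is an assumption:

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def extract_patterns(training_data: List[List[Tuple[str, str]]],
                     max_len: int) -> Dict[str, Tuple[int, str]]:
    """Count (shift, tag) outcomes per pattern, then keep the most frequent
    shift and, given that shift, the most frequent tag (Lines 14-15)."""
    counts: Dict[str, Counter] = defaultdict(Counter)
    for sent in training_data:                       # sent = [(word, tag), ...]
        chars = "".join(w for w, _ in sent)
        i, prev_tag = 0, "BOS"
        for word, tag in sent:
            shift = len(word)
            for k in range(shift, max_len + 1):
                if i + k > len(chars):
                    break
                surface = chars[i:i + k]
                counts[surface][(shift, tag)] += 1
                counts[surface + ";" + prev_tag][(shift, tag)] += 1
            i, prev_tag = i + shift, tag
    patterns: Dict[str, Tuple[int, str]] = {}
    for pat, c in counts.items():
        shift_totals = Counter()
        for (s, _), n in c.items():
            shift_totals[s] += n
        best_shift = shift_totals.most_common(1)[0][0]
        best_tag = max(((n, t) for (s, t), n in c.items() if s == best_shift))[1]
        patterns[pat] = (best_shift, best_tag)
    return patterns
```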
## 3 Experiments
This section describes an experimental evaluation of the pattern-based morphological analyzer on two annotated corpora in different domains (Kurohashi and Nagao, 2003; Hangyo et al., 2012). The method was compared with two learning-based baselines (Kudo et al., 2004; Neubig et al., 2011)
in terms of efficiency and accuracy. Note that all language resources and software used in the experiments are publicly available and free for academic use.
## 3.1 Setup
Data The experiments used the Kyoto-University Text Corpus2(KYOTO) (Kurohashi and Nagao, 2003), compiled from newspaper articles, and the Kyoto-University Web Document Leads Corpus3
(KWDLC) (Hangyo et al., 2012), compiled from the first three sentences of various Web pages. I
adopted the split of development and test sets given in the corpora's github repositories and used the remaining portions as training sets. The datasets' statistics are listed in Table 1.
Methods The three methods below were compared. To prevent overfitting, the hyperparameter C in the underlying model was tuned for the two learning-based baseline methods4 by using the development set to maximize the F1 of the POS tags.
|           | # words | # tags (four levels) |    |    |    |           |
|-----------|---------|------|----|----|----|-----------|
|           |         | 1    | 2  | 3  | 4  | all (1-4) |
| JUMAN 5.1 | 475,716 | 14   | 35 | 34 | 60 | 980       |
| JUMAN 7.0 | 702,358 | 14   | 35 | 33 | 77 | 1,188     |
Table 2: Statistics of the morphological dictionaries.
MeCab (ver. 0.996) is a C++ implementation of a search-based method (Kudo et al., 2004).5 It enumerates possible segmentations and tags as word lattices by using a dictionary and performs Viterbi search by using unigram and bigram scores factorized from feature weights.
Vaporetto (ver. 0.6.2) is a Rust6 implementation of a classification-based method (Neubig et al.,
2011).7 It first performs word segmentation by classifying whether to segment after each character in the input, and it then identifies the resulting words' POS tags. It also trains classifiers for the possible POS tag sets of individual words, and it assigns the POSs of its first dictionary entries for words that are unseen in the training data.8 A morphological dictionary was used to extract word features.
Jagger is a C++ implementation of the proposed algorithm. It greedily applies patterns extracted from the training data and a dictionary to jointly segment words and assign tags. Appendices A
and B respectively describe the method to handle unknown words and the implementation details.
Jagger is more similar to Vaporetto than to MeCab but differs in that it jointly performs segmentation and tagging instead of using a two-step cascaded pipeline, and it uses patterns instead of classifiers to find labels (i.e., where to segment and what to tag). Appendix C compares Jagger with the other implementations.
Dictionaries As listed in Table 2, the experiments used two morphological dictionaries imported to MeCab from a manually tailored morphological analyzer, JUMAN.9
Specifically, mecab-jumandic-5.1-20070304 and mecab-jumandic-7.0-20130310 were compared to examine the impact of the dictionary's quality and size.
| KYOTO             | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ | seg | top (level 1) | all (levels 1-4) |
|-------------------|------------|-------------------|---------------|-----|---------------|------------------|
| w/ jumandic-5.1 | | | | | | |
| MeCab | 26.83 | 66,455 | 55.81 | 98.68 (98.47/98.89) | 97.32 (97.12/97.53) | 95.97 (95.76/96.17) |
| Vaporetto | 15.14 | 117,767 | 658.80 | 98.94 (98.97/98.92) | 98.30 (98.32/98.27) | 96.92 (96.95/96.90) |
| Jagger (proposed) | 1.77 | 1,007,344 | 26.39 | 98.73 (98.62/98.83) | 97.62 (97.52/97.72) | 96.55 (96.45/96.65) |
| w/ jumandic-7.0 | | | | | | |
| MeCab | 29.99 | 59,453 | 77.98 | 98.37 (98.02/98.72) | 97.19 (96.84/97.54) | 96.10 (95.75/96.44) |
| Vaporetto | 16.93 | 105,316 | 828.85 | 99.08 (99.08/99.08) | 98.42 (98.42/98.43) | 97.05 (97.04/97.05) |
| Jagger (proposed) | 1.83 | 974,316 | 35.09 | 98.68 (98.51/98.86) | 97.63 (97.46/97.80) | 96.57 (96.74/96.40) |
| KWDLC             | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ | seg | top (level 1) | all (levels 1-4) |
|-------------------|------------|-------------------|---------------|-----|---------------|------------------|
| w/ jumandic-5.1 | | | | | | |
| MeCab | 23.83 | 92,110 | 53.88 | 97.13 (96.82/97.44) | 95.62 (95.32/95.93) | 94.30 (94.00/94.60) |
| Vaporetto | 10.93 | 200,823 | 642.63 | 97.35 (97.39/97.32) | 96.16 (96.20/96.13) | 94.08 (94.11/94.04) |
| Jagger (proposed) | 1.44 | 1,524,305 | 28.89 | 97.17 (96.94/97.40) | 95.71 (95.49/95.94) | 94.20 (93.98/94.42) |
| w/ jumandic-7.0 | | | | | | |
| MeCab | 26.90 | 81,598 | 76.38 | 97.99 (97.82/98.16) | 96.66 (96.49/96.83) | 95.62 (95.45/95.78) |
| Vaporetto | 12.55 | 174,900 | 842.40 | 97.53 (97.58/97.49) | 96.39 (96.43/96.34) | 94.68 (94.72/94.63) |
| Jagger (proposed) | 1.46 | 1,503,424 | 40.22 | 97.60 (97.49/97.71) | 96.14 (96.04/96.25) | 94.63 (94.52/94.73) |
Table 3: F1 (precision/recall) results on KYOTO.
Table 4: F1 (precision/recall) results on KWDLC.
The jumandic-7.0 dictionary contains words extracted automatically from the Web (Murawaki and Kurohashi, 2008), comprising a larger number (702,358) than in jumandic-5.1 (475,716). The POS tags include four levels of hierarchical morphosyntactic information: (1) major POS (e.g., *noun* and *verb*); (2)
minor POS (e.g., *common noun*); (3) conjugation type (e.g., *ichidan verb*); and (4) conjugation form
(e.g., *irrealis*). For example, the POS tags of *shumi* and *iru* in Figure 1 are noun-common_noun-*-*
and verb-*-ichidan_verb-terminal, respectively.
Evaluation procedure The precision, recall, and F1 of the segmentation with various levels of POS
tags (Kudo et al., 2004) were used as metrics. As Vaporetto does not output lemmas, lemmatization was evaluated via the tagging results of the full POS
tag set ("all (levels 1-4)" in Tables 3 and 4), which included conjugation types and forms, given that Japanese words can be mapped to their lemmas according to their conjugation types and forms. I
processed 1000 copies of the test data and measured the time, speed, and maximum memory consumption three times with the /usr/bin/time -l command. The median values are reported here.
All experiments were done on an M2 MacBook Air with a 3.5-GHz CPU and 24-GB main memory.
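A sketch of the span-based precision/recall/F1 used in this evaluation; tags can be truncated to the desired POS level, or set to an empty string to score segmentation alone:

```python
from typing import List, Tuple

def span_prf(gold: List[Tuple[str, str]],
             pred: List[Tuple[str, str]]) -> Tuple[float, float, float]:
    """Precision, recall, and F1 over (start, end, tag) spans of a sentence."""
    def to_spans(words):
        spans, pos = set(), 0
        for w, t in words:
            spans.add((pos, pos + len(w), t))
            pos += len(w)
        return spans

    g, p = to_spans(gold), to_spans(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```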
## 3.2 Results
Tables 3 and 4 summarize the morphological analysis results on the KYOTO and KWDLC datasets.
The pattern-based method here, Jagger, was 16 and 7 times faster than MeCab and Vaporetto with 1/2 and 1/20 as much memory consumption, respectively, while achieving comparable accuracy.
Jagger is efficient because it does not have massive floating-point parameters, unlike other methods, and because it minimizes the number and length of patterns by pruning (Lines 16-18 in Algorithm 2).
As a result, the training took less than six seconds.
MeCab's accuracy depends on the dictionary: with jumandic-7.0, it worked best on KWDLC and worst on KYOTO. In contrast, Vaporetto's accuracy depends on the training data size. It worked best on KYOTO but was just as good as Jagger on KWDLC.
Below are the detailed results for Jagger with the jumandic-7.0 dictionary.
Comparison to neural methods Jagger was compared to a state-of-the-art neural method (Tolmachev et al., 2018), JUMAN++-V2, 10 which was trained on the same data with the official script and hyperparameters.11 Note that this comparison was **unfair** to Jagger in terms of accuracy and to JUMAN++-V2 in terms of efficiency, because JUMAN++-V2 uses 0.8 million additional dictionary entries from Wikipedia and a neural language model trained on 10 million sentences from the Web.
|            | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ | seg | top (level 1) | all (levels 1-4) |
|------------|------------|-------------------|---------------|-----|---------------|------------------|
| KYOTO | | | | | | |
| JUMAN++-V2 | 331.14 | 5384 | 300.80 | 99.37 (99.30/99.45) | 98.72 (98.65/98.80) | 97.74 (97.66/97.82) |
| Jagger (proposed) | 1.83 | 974,316 | 35.09 | 98.68 (98.51/98.86) | 97.63 (97.46/97.80) | 96.57 (96.74/96.40) |
| KWDLC | | | | | | |
| JUMAN++-V2 | 283.11 | 7753 | 290.05 | 98.37 (98.25/98.50) | 97.61 (97.49/97.73) | 96.42 (96.30/96.55) |
| Jagger (proposed) | 1.46 | 1,503,424 | 40.22 | 97.60 (97.49/97.71) | 96.14 (96.04/96.25) | 94.63 (94.52/94.73) |
Table 5: F1 (precision/recall) comparison with JUMAN++.
|                   | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ |
|-------------------|------------|-------------------|---------------|
| KYOTO             |            |                   |               |
| MeCab             | 28.53      | 62,495            | 40.52         |
| Vaporetto         | 4.87       | 366,119           | 283.49        |
| Jagger (proposed) | 1.41       | 1,264,539         | 21.05         |
| KWDLC             |            |                   |               |
| MeCab             | 25.70      | 85,408            | 39.59         |
| Vaporetto         | 4.87       | 366,119           | 283.49        |
| Jagger (proposed) | 1.13       | 1,942,477         | 20.16         |

Table 6: Efficiency of word segmentation.

|                               | seg   | top (level 1) | all (levels 1-4) |
|-------------------------------|-------|---------------|------------------|
| training: KWDLC → test: KYOTO |       |               |                  |
| MeCab                         | 97.90 | 96.56         | 94.82            |
| Vaporetto                     | 95.76 | 93.81         | 91.31            |
| Jagger (proposed)             | 97.25 | 95.42         | 93.30            |
| training: KYOTO → test: KWDLC |       |               |                  |
| MeCab                         | 97.78 | 96.02         | 94.48            |
| Vaporetto                     | 97.05 | 95.15         | 92.72            |
| Jagger (proposed)             | 97.22 | 95.01         | 93.12            |

Table 7: F1 results for cross-domain evaluation.
Table 5 summarizes the comparison between Jagger and JUMAN++-V2. Although JUMAN++-
V2 was reported to speed up JUMAN++ (Morita et al., 2015) by a factor of 250, Jagger was faster than JUMAN++-V2 by a factor of 180 with 1/7 as much of a memory footprint. JUMAN++-V2 was more accurate than Jagger, but the gain was less than 1% for word segmentation. If external text could be used, this gap could be reduced with a technique called structure compilation (Liang et al.,
2008), which runs JUMAN++-V2 on external text to extract patterns. That idea is beyond this paper's scope but important for future work.
Word segmentation efficiency Because of different approaches to handling unknown words and supporting lemmatization, it is difficult to compare Vaporetto with Jagger and MeCab as a morphological analyzer in a strictly fair manner. Instead, the word segmentation efficiency was compared, as summarized in Table 6. Here, Vaporetto was trained to perform only word segmentation by using the dictionary and the training data without POS
tags. Jagger was faster and more space-efficient than Vaporetto, even taking the overhead of loading large models (1.7 seconds) into account.
Cross-domain evaluation Lastly, Table 7 lists the results for cross-domain evaluation. Vaporetto's accuracy became much worse, indicating that the classification-based method was prone to overfitting to the training domain. The proposed method enjoys the benefits of the dictionary and training data: it can change its behavior by adding not only dictionary entries but also patterns.
## 4 Conclusions
This study sought to improve the accuracy of speedoriented, pattern-based methods for Japanese morphological analysis, rather than improving the speed of accuracy-oriented neural models. The proposed method extracts POS-augmented patterns from a morphological dictionary and annotated data. Experimental results on two standard datasets confirmed that this method achieves accuracy comparable to that of learning-based methods, with a very fast throughput of over 1,000,000 sentences per second on a laptop.
I plan to apply this approach to other languages and even to other NLP tasks by discretizing the continuous representations induced by neural models to obtain patterns. The source code is released with GPL, LGPL, and 2-clause BSD licenses.
Message to researchers Because accuracies on NLP benchmark datasets are becoming saturated with ever larger foundation models, researchers may want to set diverse goals based on underrepresented metrics besides accuracy (*e.g.*, efficiency). I hope that this study will initiate *serious* research on speed-intensive approaches to NLP that can meet industry demands and enable researchers with limited computational resources to fully exercise their abilities.
## 5 Limitations
This evaluation had two limitations. First, although the method is not language-dependent, it was evaluated on a single language, Japanese. It would be worthwhile to evaluate the method on other languages to examine the approach's versatility. Second, the method uses dictionaries to obtain patterns.
Although Japanese morphological analysis commonly uses dictionaries to perform lemmatization, it would be worthwhile to evaluate the method with only training data or dictionaries derived from text.
Below, I discuss the current limitations for word segmentation, POS tagging, and lemmatization in detail.
Word segmentation The proposed method's accuracy of word segmentation will depend on the target language's typological factors (Shao et al.,
2018), such as the character set size, lexicon size, and average word length. Among those factors, the character set size will especially matter because the current patterns mostly comprise surface strings and are likely to suffer from data sparseness. It will thus be valuable to evaluate the method on Chinese, which has a larger character set than Japanese. It will also be important to evaluate the method on languages with different typological factors from Japanese, such as Hebrew and Finnish. The training data size will not matter if the method is used to approximate some existing resource-efficient method via structure compilation (Liang et al., 2008).
POS **tagging** Compared to word segmentation, POS tagging requires more complex and abstract feature sets that are tailored for the target language and POS tag set (Spoustová et al., 2009), which poses a challenge for the proposed method. The current pattern template is tailored for Japanese and the JUMAN POS tag set; hence, for other languages and POS tag sets, a pattern template will need to be designed by referring to the feature templates of existing learning-based methods for the target language and POS tag set. Because the method jointly solves word segmentation and POS tagging in a left-to-right manner, patterns cannot leverage certain abstract features from posterior contexts of the target word (*e.g.*, the next word's suffix). For application to other languages, it would be worthwhile to explore not only left-to-right processing but also right-to-left processing and a cascaded pipeline approach.
Lemmatization The approach here currently requires a morphological dictionary with lemmas or a fine-grained POS tag set that includes conjugation types and forms to perform lemmatization. Because lemma generation rules for other languages can be induced from lemma-annotated datasets (Straka, 2018), the method could be applied to other languages by using such lemma generation rules as the target labels for classification.
Challenging target languages include morphologically rich languages such as Arabic and Czech.
## 6 Ethics Statement
I am not aware of any specific social risks that this work directly creates or exacerbates. However, because morphological analysis is a core text processing function used in various NLP applications, those who attempt to abuse NLP applications may benefit from the proposed method's efficiency.
## Acknowledgements
This work was partially supported by JSPS KAKENHI Grant Number JP21H03494 and by JST,
CREST Grant Number JPMJCR19A4, Japan. I
thank Koichi Akabe for showing implementations of assigning POSs to unknown words in Vaporetto, Keiji Shinzato for his comments on an early draft of this paper, and Manabu Sassano for useful discussions on the future of speed-intensive NLP. Finally, I thank the anonymous reviewers for their encouraging comments on the paper's goal.
## References
Jun'ichi Aoe. 1989. An efficient digital search algorithm by using a double-array structure. *IEEE Transactions on Software Engineering*, 15(9):1066–1077.
Masatsugu Hangyo, Daisuke Kawahara, and Sadao Kurohashi. 2012. Building a diverse document leads corpus annotated with semantic relations. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, pages 535–544, Bali, Indonesia.
Julia Hockenmaier and Chris Brew. 1998. Error-driven learning of Chinese word segmentation. In Proceedings of the 12th Pacific Asia Conference on Language, Information and Computation, pages 218–229, Singapore.
Nobuhiro Kaji, Yasuhiro Fujiwara, Naoki Yoshinaga, and Masaru Kitsuregawa. 2010. Efficient staggered decoding for sequence labeling. In *Proceedings of*
the 48th Annual Meeting of the Association for Computational Linguistics, pages 485–494, Uppsala, Sweden.
Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto.
2004. Applying conditional random fields to Japanese morphological analysis. In *Proceedings of* the 2004 Conference on Empirical Methods in Natural Language Processing, pages 230–237, Barcelona, Spain.
Sadao Kurohashi and Makoto Nagao. 2003. Building a japanese parsed corpus. In Anne Abeillé, editor, *Treebanks: Building and Using Parsed Corpora*, pages 249–260. Springer Netherlands, Dordrecht.
Percy Liang, Hal Daumé, and Dan Klein. 2008. Structure compilation: Trading structure for features. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 592–599, New York, NY, USA. Association for Computing Machinery.
Hiroshi Maruyama. 1994. Backtracking-free dictionary access method for Japanese morphological analysis.
In COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics, Kyoto, Japan.
Hajime Morita and Tomoya Iwakura. 2019. A fast and accurate partially deterministic morphological analysis. In *Proceedings of the International Conference* on Recent Advances in Natural Language Processing (RANLP 2019), pages 804–809, Varna, Bulgaria.
INCOMA Ltd.
Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language* Processing, pages 2292–2297, Lisbon, Portugal.
Yugo Murawaki and Sadao Kurohashi. 2008. Online acquisition of Japanese unknown morphemes using morphological constraints. In *Proceedings of the* 2008 Conference on Empirical Methods in Natural Language Processing, pages 429–437, Honolulu, Hawaii.
Masaaki Nagata. 1994. A stochastic Japanese morphological analyzer using a forward-DP backward-A*
n-best search algorithm. In *COLING 1994 Volume 1:*
The 15th International Conference on Computational Linguistics, Kyoto, Japan.
Graham Neubig, Yosuke Nakata, and Shinsuke Mori.
2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In *Proceedings* of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 529–533, Portland, Oregon, USA.
David D. Palmer. 1997. A trainable rule-based algorithm for word segmentation. In *35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of* the Association for Computational Linguistics, pages 321–328, Madrid, Spain.
Manabu Sassano. 2014. Deterministic word segmentation using maximum matching with fully lexicalized rules. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 79–83, Gothenburg, Sweden.
Yan Shao, Christian Hardmeier, and Joakim Nivre. 2018.
Universal word segmentation: Implementation and interpretation. *Transactions of the Association for* Computational Linguistics, 6:421–435.
Drahomíra "johanka" Spoustová, Jan Hajič, Jan Raab,
and Miroslav Spousta. 2009. Semi-supervised training for the averaged perceptron POS tagger. In *Proceedings of the 12th Conference of the European* Chapter of the ACL (EACL 2009), pages 763–771, Athens, Greece. Association for Computational Linguistics.
Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL
2018 UD shared task. In Proceedings of the CoNLL
2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207, Brussels, Belgium. Association for Computational Linguistics.
Arseny Tolmachev, Daisuke Kawahara, and Sadao Kurohashi. 2018. Juman++: A morphological analysis toolkit for scriptio continua. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 54–59, Brussels, Belgium.
Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H. Martins, André F. T. Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, and Roy Schwartz. 2022.
Efficient methods for natural language processing: A
survey. *CoRR*, arXiv:2209.00099.
Naoki Yoshinaga and Masaru Kitsuregawa. 2009. Polynomial to linear: Efficient classification with conjunctive features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1542–1551, Singapore.
Naoki Yoshinaga and Masaru Kitsuregawa. 2010. Kernel slicing: Scalable online training with conjunctive features. In *Proceedings of the 23rd International* Conference on Computational Linguistics (Coling 2010), pages 1245–1253, Beijing, China. Coling 2010 Organizing Committee.
Naoki Yoshinaga and Masaru Kitsuregawa. 2014. A
self-adaptive classifier for efficient text-stream processing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1091–1102, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
| KYOTO             | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ |
|-------------------|------------|-------------------|---------------|
| w/ jumandic-5.1   |            |                   |               |
| MeCab             | 26.83      | 66,455            | 55.81         |
| Vibrato           | 12.47      | 142,983           | 97.75         |
| Vaporetto         | 15.14      | 117,767           | 658.80        |
| Jagger (proposed) | 1.77       | 1,007,344         | 26.39         |
| w/ jumandic-7.0   |            |                   |               |
| MeCab             | 29.99      | 59,453            | 77.98         |
| Vibrato           | 16.01      | 111,367           | 164.20        |
| Vaporetto         | 16.93      | 105,316           | 828.85        |
| Jagger (proposed) | 1.83       | 974,316           | 35.09         |

Table 8: Efficiency of morphological analysis on KYOTO; results other than for Vibrato are from Table 3.

| KWDLC             | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ |
|-------------------|------------|-------------------|---------------|
| w/ jumandic-5.1   |            |                   |               |
| MeCab             | 23.83      | 92,110            | 53.88         |
| Vibrato           | 11.51      | 190,703           | 97.92         |
| Vaporetto         | 10.93      | 200,823           | 642.63        |
| Jagger (proposed) | 1.44       | 1,524,305         | 28.89         |
| w/ jumandic-7.0   |            |                   |               |
| MeCab             | 26.90      | 81,598            | 76.38         |
| Vibrato           | 15.01      | 146,235           | 163.99        |
| Vaporetto         | 12.55      | 174,900           | 842.40        |
| Jagger (proposed) | 1.46       | 1,503,424         | 40.22         |

Table 9: Efficiency of morphological analysis on KWDLC; results other than for Vibrato are from Table 4.
## A Handling Of Unknown Words
Words that appear in neither the dictionary nor the training data matter in both the proposed method and search-based morphological analysis. Here, a common method (Kudo et al., 2004) was used to segment unknown words. Specifically, characters
(and words) with the same character type (numbers, letters, or katakana) were concatenated, with the concatenation restricted for katakana words when the total length of two katakana words exceeded a specific length (here, 18 bytes). The POS
tags of concatenated unknown words were determined from a pattern based on the previous POS
tag and the last concatenated word.
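A simplified sketch of this grouping; the coarse character-type classification and the way the 18-byte cap is applied are assumptions made for illustration:

```python
import unicodedata
from typing import List

def char_type(ch: str) -> str:
    """Coarse character type used to group unknown characters (assumed categories)."""
    if ch.isdigit():
        return "number"
    if "KATAKANA" in unicodedata.name(ch, ""):
        return "katakana"
    if ch.isalpha():
        return "letter"
    return "other"

def group_unknown(chars: str, max_katakana_bytes: int = 18) -> List[str]:
    """Concatenate adjacent characters sharing a type; cap katakana runs by UTF-8 length."""
    groups, buf, prev = [], "", None
    for ch in chars:
        t = char_type(ch)
        too_long = (t == "katakana"
                    and len((buf + ch).encode("utf-8")) > max_katakana_bytes)
        if t == prev and not too_long:
            buf += ch
        else:
            if buf:
                groups.append(buf)
            buf, prev = ch, t
    if buf:
        groups.append(buf)
    return groups
```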
## B Implementation Details
Implementation techniques used in the existing efficient implementations of Japanese morphological analyzers were leveraged to implement Jagger.
As in MeCab, memory-mapped I/O was adopted to reduce the memory footprint, and outputs are generated by referring to strings in the in-memory dictionary while avoiding dynamic memory allocation. To maintain patterns, I used a character-wise, double-array trie that was adopted in Vaporetto and Vibrato.12 To implement it, I modified an implementation of a byte-wise, double-array trie (Yoshinaga and Kitsuregawa, 2014), cedar.13 The character-wise, double-array trie uses UTF-8 characters as atomic transition labels instead of UTF-8 bytes, which reduces the number of random accesses in traversing Japanese multi-byte characters.
|                   | time [s] ↓ | speed [sent./s] ↑ | space [MiB] ↓ |
|-------------------|------------|-------------------|---------------|
| KYOTO | | | |
| MeCab | 28.53 | 62,495 | 40.52 |
| Vibrato | 14.69 | 121,375 | 163.92 |
| Vaporetto | 4.87 | 366,119 | 283.49 |
| Jagger (proposed) | 1.41 | 1,264,539 | 21.05 |
| SentencePiece | 16.63 | 107,215 | 9.02 |
| UTF-8 split | 0.31 | 5,751,612 | 1.55 |
| KWDLC | | | |
| MeCab | 25.70 | 85,408 | 39.59 |
| Vibrato | 13.94 | 157,460 | 164.30 |
| Vaporetto | 4.87 | 366,119 | 283.49 |
| Jagger (proposed) | 1.13 | 1,942,477 | 20.16 |
| SentencePiece | 14.54 | 150,962 | 9.05 |
| UTF-8 split | 0.27 | 8,129,629 | 1.55 |
Table 10: Efficiency of word segmentation (tokenization); some results are from Table 6.
For the trie transition, UTF-8 characters in the training data are counted to obtain cache-friendly, frequency-based IDs for the UTF-8 characters. These implementation tricks provided a total speed-up factor of at most two.
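Only the ID-assignment step is sketched below (the double-array trie itself is not reproduced); the idea is that frequent characters receive small IDs so that their transitions stay in nearby, cache-friendly array slots:

```python
from collections import Counter
from typing import Dict, Iterable

def frequency_based_ids(training_lines: Iterable[str]) -> Dict[str, int]:
    """Map each character to an ID in decreasing order of training frequency."""
    freq = Counter(ch for line in training_lines for ch in line)
    return {ch: i for i, (ch, _) in enumerate(freq.most_common(), start=1)}
```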
Note that block I/O, which outputs results with a fixed large size (256 KiB in these experiments), is crucial to maintain the method's very fast throughput when lengthy POS tags and lemmas are output.
The use of strcpy and strlen should be strictly avoided in formatting the output because they incur extra search for the terminal symbol \0.
## C **Comparison To Other Implementations**
I also compared Jagger with Vibrato (ver. 0.5.0),12 which is a recent Rust reimplementation of MeCab by the developer of Vaporetto, and SentencePiece
(ver. 0.1.99),14 which is an unsupervised text tokenizer for neural generation. SentencePiece was trained with the default options (vocabulary size of 8K) on the same training data.
14https://github.com/google/sentencepiece
Tables 8 and 9 summarize the efficiency of morphological analysis, and Table 10 summarizes the efficiency of word segmentation (tokenization) with the jumandic-7.0 dictionary. Although Vibrato is twice as fast as MeCab and shows comparable speed to Vaporetto for morphological analysis, Jagger is even faster and is more space-efficient than Vibrato. Jagger's throughput is on the same order as that of UTF-8 split, which simply looks at the first bytes (byte lengths) of UTF-8 characters to segment inputs into characters. Note that SentencePiece's small memory consumption is due to its small vocabulary size of 8K: it requires more memory for a larger vocabulary.
Finally, it is noteworthy that the degree to which the processing speed is affected by the morphological dictionary's size varies from one implementation to another (Tables 8 and 9). Vibrato is the most affected by the dictionary size, whereas Jagger is the least affected.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5
✓ A2. Did you discuss any potential risks of your work?
6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
The statistics of the evaluation datasets (the number of sentences and average number of words per sentence in train/test/dev splits)
## C ✓ **Did You Run Computational Experiments?** 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Our pattern-based method has no fluctuation in results. The other non-neural methods compared in the main paper use convex optimization.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
kim-etal-2023-transformed | Transformed Protoform Reconstruction | https://aclanthology.org/2023.acl-short.3 | Protoform reconstruction is the task of inferring what morphemes or words appeared like in the ancestral languages of a set of daughter languages. Meloni et al. (2021) achieved the state-of-the-art on Latin protoform reconstruction with an RNN-based encoder-decoder with attention model. We update their model with the state-of-the-art seq2seq model: the Transformer. Our model outperforms their model on a suite of different metrics on two different datasets: their Romance data of 8,000 cognates spanning 5 languages and a Chinese dataset (Hou 2004) of 800+ cognates spanning 39 varieties. We also probe our model for potential phylogenetic signal contained in the model. Our code is publicly available at https://github.com/cmu-llab/acl-2023. |
## Transformed Protoform Reconstruction
Young Min Kim∗and **Kalvin Chang**∗and **Chenxuan Cui** and **David Mortensen**
Language Technologies Institute, Carnegie Mellon University
{youngmik, kalvinc, cxcui, dmortens}@cs.cmu.edu
## Abstract
Protoform reconstruction is the task of inferring how morphemes or words sounded in ancestral languages of a set of daughter languages. Meloni et al. (2021) achieved the state-of-the-art on Latin protoform reconstruction with an RNN-based encoder-decoder with attention model. We update their model with the state-of-the-art seq2seq model—the Transformer. Our model outperforms their model on a suite of different metrics on two different datasets: Meloni et al.'s Romance data of 8,000+ cognates (spanning 5 languages) and a Chinese dataset (Hóu, 2004) of 800+ cognates
(spanning 39 varieties). We also probe our model for potential phylogenetic signal contained in the model. Our code is publicly available 1.
## 1 Introduction
Languages change over time and sometimes diverge into multiple daughter languages. The common ancestor of a set of genetically related languages is their proto-language. While there are proto-languages such as Latin that are attested, they are the exception2. Reconstructed words and morphemes in proto-languages are called protoforms. The task of reconstructing unattested protolanguages is called protoform reconstruction.
Historical linguists reconstruct proto-languages by identifying systematic sound changes that can be inferred from correspondences between attested daughter languages (see Table 1). They compare the sounds between a set of cognates, or words with a common ancestor, to develop hypotheses about the types and chronologies of sound changes.
∗Equal contribution 1https://github.com/cmu-llab/acl-2023 2In fact, the proto-language from which Romance languages like Spanish and Italian are descended is not identical to Classical Latin but is, rather, a closely related and sparsely attested language sometimes called Proto-Romance or Vulgar Latin.
|         | 'tooth' | 'two'  | 'ten'  |    |
|---------|---------|--------|--------|----|
| English | tooth | two | ten | t |
| Dutch | tand | twee | tien | t |
| German | Zahn | zwei | zehn | z |
| PWG | *tanþ | *twai- | *tehun | *t |
Table 1: Sound correspondences in West Germanic Languages and Proto-West-Germanic (PWG).
This task is inherently data-constrained, especially for under-documented languages. Such data scarcity makes it a particularly difficult task for contemporary neural network architectures such as the Transformer (Vaswani et al., 2017), which are data hungry.
The contributions of this paper are as follows:
- Application of the Transformer architecture to the protoform reconstruction task, achieving state of the art performance, contrary to expectation.
- Expansion of prior digital versions of Hóu
(2004)'s Chinese dataset to include a total of 804 cognate sets across 39 modern varieties and Middle Chinese.
## 2 Related Work
Applying machine learning to protoform reconstruction is not new. Bouchard-Côté et al. (2013)
learn an unsupervised protoform reconstruction model for the large Oceanic language family using Monte Carlo Expectation Maximization (Dempster et al., 1977; Bouchard-Côté et al., 2008), supervising the model with a gold phylogeny and using a probabilistic, generative model of sound change. He et al. (2022) modernize an earlier version of Bouchard-Côté et al. (2013)'s model with RNNs for a 4-language subset of Romance, but they rely on a bigram language model of Latin, making their model technically not unsupervised.
List et al. (2022) apply an SVM classifier to supervised reconstruction by treating sound correspondences as training examples. Note that there were no word boundaries in the input matrix; that is, all sound correspondences across the training set are flattened into one matrix. Furthermore, each language has an independent phonemic inventory. To learn contextual information, the authors experiment with adding features encoding the position of phonemes, among others.
Ciobanu and Dinu (2018) learn a conditional random field (Lafferty et al., 2001) using n-gram features for supervised reconstruction and ensemble 5 daughter-to-protoform models. They use a dataset of 3,218 complete cognate sets spanning Latin (the proto-language) and 5 Romance languages: Romanian, French, Italian, Spanish, Portuguese.
Meloni et al. (2021) employ a GRU-based seq2seq approach (Cho et al., 2014) to Latin protoform reconstruction and achieve state-of-the-art character edit distances. They extend Dinu and Ciobanu (2014)'s Romance data using data from Wiktionary—for a total of 8,799 cognate sets across 5 Romance languages plus Latin—in both orthographic and phonetic (IPA) representations.
In their model, all entries comprising the cognate set are concatenated together in a fixed order to form a training example. Chang et al. (2022) applied Meloni et al. (2021)'s architecture to the reconstruction of Middle Chinese on a dataset of 5000+ cognate sets spanning 8 languages they compiled from Wiktionary.3 Fourrier (2022) compares statistical machine translation, RNN, and Transformer architectures for protoform reconstruction, but they evaluate their results using BLEU scores (Papineni et al.,
2002) instead of edit distance. They find that their Transformer model did not outperform the RNN
models on protoform reconstruction. In addition, their multilingual NMT (neural machine translation) model predicts many languages instead of one target language and is trained on bilingual pairs for protoform reconstruction (e.g. ItalianLatin and Spanish-Latin), unlike comparative reconstruction. In contrast, we encode the entire cognate set consisting of multiple daughter languages
(5 for the Romance dataset; 39 for Chinese) and predict the corresponding protoform.
3The original dataset contains 21,000 cognate sets, but only 5000+ had at least 3 daughter entries and were used as input to the model.
## 3 Datasets
We train and test our model on Romance and Sinitic (Chinese) language datasets. For Romance languages, we use Meloni et al. (2021)'s dataset which consists of 8,799 cognate sets of Romanian, French, Italian, Spanish, Portuguese words and the corresponding Latin form (approximately, a protoform). There are two versions of this dataset: phonetic and orthographic. The phonetic dataset
(Rom-phon) represents words with IPA symbols whereas the orthographic dataset (Rom-orth) represents words in the orthographic form of each language. We preserved all diacritics, except for vowel length. This dataset is an extension of Dinu and Ciobanu (2014)'s original dataset of 3,218 cognate sets, which is not publicly available. Refer to Table 2 for more information.
## 3.1 Expanding Digital Versions Of Hóu **(2004)**
For Sinitic languages, we created a dataset of Middle Chinese and its modern daughter languages.
Middle Chinese is an unattested language, and we thus have to rely on Baxter and Sagart (2014)'s reconstructions of forms corresponding to 4,967 Chinese characters. We scraped Wiktionary to obtain Hóu (2004)'s phonetic representations of their modern reflexes.4 The resulting dataset contains 804 cognate sets of 39 modern Sinitic languages and the corresponding reconstructed Middle Chinese word. List (2021)'s version previously had 894 cognate sets across 15 varieties.
## 4 Model
We propose a Transformer-based encoder-decoder architecture (Vaswani et al., 2017) because such models have produced state-of-the-art results on many sequence processing tasks. Transformers are by reputation data hungry, though, which poses a challenge to our problem setting, where the number of available training examples is often very small.
We modify the standard encoder-decoder architecture to accommodate the structure of our datasets, where multiple daughter sequences correspond to a single protoform sequence. Like Meloni et al. (2021), the daughter sequences are concatenated into a single sequence before being fed into the encoder. Because we only care about the relative position between tokens within each daughter sequence but not across daughter sequences, positional encoding is applied to each individual daughter sequence before concatenation. Along with positional encoding, an additive language embedding is applied to the token embeddings to differentiate between input tokens of different daughter languages.
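A PyTorch sketch of this input construction; the hyperparameters are illustrative, padding masks are omitted, and the decoder is not shown:

```python
import math
import torch
import torch.nn as nn

class CognateSetEncoder(nn.Module):
    """Concatenate daughter sequences after adding per-daughter (restarting)
    sinusoidal positions and an additive language embedding."""
    def __init__(self, vocab_size: int, n_langs: int, d_model: int = 128,
                 nhead: int = 8, num_layers: int = 4, max_len: int = 64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.lang = nn.Embedding(n_langs, d_model)
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)                 # assumes seq_len <= max_len
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, daughters):
        """daughters: list of (lang_id, LongTensor[batch, seq_len]) pairs."""
        pieces = []
        for lang_id, toks in daughters:
            x = self.tok(toks) + self.pe[: toks.size(1)]   # positions restart per daughter
            x = x + self.lang.weight[lang_id]              # additive language embedding
            pieces.append(x)
        return self.encoder(torch.cat(pieces, dim=1))      # [batch, total_len, d_model]
```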
## 5 Experiments

## 5.1 Baselines
We compare our Transformer model to a variety of baselines. For Meloni et al. (2021), we use Chang et al. (2022)'s PyTorch re-implementation and rerun a Bayesian hyperparameter search using WandB (Biewald, 2020) to ensure a fairer comparison (since our model is tuned with WandB
as well). We also include the random daughter
(randomly designate a daughter form as the protoform and assume no sound change) and the majority constituent baselines (predict the most common phoneme in each syllable constituent) from Chang et al. (2022). For the SVM and CoRPaR classifiers (List et al., 2022), we experiment with different contextual features, such as Pos (position),
Str (prosodic structure), and Ini (whether or not the phoneme appears word-initially or word-finally).
We publish results on Meloni et al. (2021)'s full set of 8,799 cognates but cannot redistribute this set due to Dinu and Ciobanu (2014)'s restrictions. For reproducibility, we include results on Meloni et al. (2021)'s public subset of 5,419 cognates in the Appendix (Table 7), both of which include vowel length. Observe that these results are worse than those obtained on the full set, suggesting that the RNN and Transformer are dependent on a wealth of training data.
## 5.2 Preprocessing
In all our datasets, we merge diacritics to their base segments to form a multi-character token. For instance, the sequence [t, ʰ] is concatenated to [tʰ].
This ensures that phonemes are treated as one token. For Chinese, tone contours (a sequence of tones) are treated as one token. When multiple pronunciation variants are listed for a single Chinese character, we arbitrarily pick the first one.
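A minimal sketch of this merging step is given below. Treating Unicode combining marks (`Mn`), modifier symbols (`Sk`), and modifier letters (`Lm`) as "attaching" characters is an assumption of the sketch, not the exact rule used here, and Chinese tone contours would need an additional rule to group tone letters into a single token.

```python
import unicodedata

def merge_diacritics(ipa_word):
    """Merge combining marks and modifier letters (aspiration, palatalization,
    length, etc.) with the preceding base symbol: [t, ʰ] -> [tʰ]."""
    tokens = []
    for ch in ipa_word:
        attaches = unicodedata.category(ch) in ("Mn", "Sk", "Lm")
        if attaches and tokens:
            tokens[-1] += ch   # glue the mark onto the previous base segment
        else:
            tokens.append(ch)
    return tokens

print(merge_diacritics("tʰan"))   # ['tʰ', 'a', 'n']
```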
## 6 Results And Discussion

## 6.1 Evaluation Criteria
We evaluate the predicted protoforms using edit distance (Levenshtein et al., 1966), normalized edit distance (edit distance normalized by the length of the target) and accuracy (the percentage of protoforms that are reconstructed without any mistakes). Like Chang et al. (2022), we also use feature error rate calculated using articulatory feature vectors from PanPhon (Mortensen et al.,
2016) because it reflects the phonetic similarity between the prediction and the gold protoform. For datasets with phonetic transcriptions (Romance-phonetic and Chinese), we use phoneme edit distance and normalized phoneme edit distance. As List (2019) suggests, we use B-Cubed F Scores
(Amigó et al., 2009) to capture the structural similarity between the gold and predicted protoforms
(0: structurally dissimilar, 1: similar). With the exception of character and phoneme edit distance, the metrics enable fair comparison across different language families, which will differ in the average word length.
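For concreteness, a minimal sketch of the edit-distance-based metrics (PED, NPED, and exact-match accuracy) over phoneme token sequences is given below; the function names are illustrative, and FER (via PanPhon feature vectors) and B-Cubed F scores are not re-implemented here.

```python
def edit_distance(pred, gold):
    """Levenshtein distance over phoneme tokens (standard dynamic programming)."""
    m, n = len(pred), len(gold)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def evaluate(preds, golds):
    """Corpus-level PED, NPED (normalized by gold length), and accuracy."""
    ped = sum(edit_distance(p, g) for p, g in zip(preds, golds)) / len(golds)
    nped = sum(edit_distance(p, g) / len(g) for p, g in zip(preds, golds)) / len(golds)
    acc = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    return ped, nped, acc
```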
## 6.2 Results
Table 3 shows that our model consistently has the best performance on all datasets with regards to most metrics. The results were averaged across 5 runs. Out of all datasets, our model performs best on the Rom-orth dataset, where we achieve a 7.0%
| Language Family | Source | # varieties | Cognate sets | Proto-language |
|---|---|---|---|---|
| Rom-phon | Dinu and Ciobanu (2014), Meloni et al. (2021) | 5 | 8,799 | Latin |
| Rom-orth | Dinu and Ciobanu (2014), Meloni et al. (2021) | 5 | 8,799 | Latin |
| Sinitic (Chinese) | Hóu (2004) | 39 | 804 | Middle Chinese |
Table 2: Statistics on both datasets used in our experiments. \# varieties refers to the number of daughter varieties.
![3_image_0.png](3_image_0.png)
decrease in phoneme edit distance and a 1.43p.p improvement in accuracy relative to the RNN baseline. We observe the most dramatic performance difference with the RNN baseline on the Sinitic dataset: a 10.48% decrease in phoneme edit distance and a 5.47p.p increase in accuracy. For reproducibility, results on the publicly available portion of the Rom-phon and Rom-orth datasets are provided in Table 7 in the Appendix.
## 6.3 Analysis
We observe that the BCFS is relatively high for the Romance non-neural baselines compared to those of the Chinese ones. This suggests that the sound changes in the Romance datasets are more regular than that of Chinese, which corroborates List et al.
(2014)'s results that more than half of the Chinese characters in their dataset could not be explained by a tree model.
We examine the errors made by the Transformer model on the Rom-phon dataset. Substitutions constitute around 61% of the errors made by the Transformer; deletions, 21%; and insertions, 18%.
The highest number of substitution errors occur between [i, ɪ], [e, ɛ], [o, ɔ] and [u, ʊ]—vowel pairs that contrast only in tenseness. This is consistent with the analysis of Meloni et al. (2021), where substitutions between tense-lax vowel pairs take up the largest portion of errors.
We observe that other common substitution errors also happen between phonemes that share major phonetic features. This demonstrates that although no explicit phonetic information is fed directly into the model, the model makes mistakes motivated by phonetic similarity, like Meloni et al.
(2021).
We do not observe notable differences in the error statistics between the Transformer and the RNN.
## 6.4 Language Relatedness
Inspired by Fourrier (2022), we probe our model for diachronic information on how genetically related each Romance language is to each other. We create a distance matrix between every pair of languages in a dataset by taking the cosine similarity between a pair's language embeddings. We then use sklearn (Pedregosa et al., 2011)'s implementation of the Ward variance minimization algorithm (Ward Jr, 1963) to perform hierarchical clustering on the distance matrix. We take a consensus of the dendrograms from 5 different runs using the consense program from PHYLIP (Felsenstein, 2013).
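A minimal sketch of this clustering step is shown below. It uses scipy's hierarchical clustering for brevity (the experiments above use sklearn's Ward implementation and the PHYLIP consense program for the consensus over runs), and the function name and the unit-normalization detail are assumptions of the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def language_tree(lang_names, lang_embeddings):
    """lang_embeddings: [n_langs, d] array of learned language embeddings."""
    E = np.asarray(lang_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
    dist = 1.0 - E @ E.T                               # cosine distance matrix
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="ward")
    return dendrogram(Z, labels=list(lang_names), no_plot=True)
```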
As we see in Figure 2, the Transformer captures more of the phylogenetic relationships among the languages correctly for the Rom-phon dataset. Indeed, the Generalized Quartet Distance (GQD)
(Sand et al., 2013; Pompei et al., 2011; Rama et al.,
2018) between the gold and predicted tree, calculated using quartetDist from the tqDist library
(Sand et al., 2014), is 0.4 for the Transformer but 0.8 for the RNN. See Figure 5 in the Appendix for the results of the orthographic dataset.
| Dataset | Model | PED ↓ | NPED ↓ | Acc % ↑ | FER ↓ | BCFS ↑ |
|---|---|---|---|---|---|---|
| Sinitic | Random daughter (Chang et al., 2022) | 3.7702 | 0.8405 | 0% | 0.2893 | 0.2748 |
| | Majority constituent (Chang et al., 2022) | 3.5031 | 0.7806 | 0% | 0.2013 | 0.3695 |
| | CorPaR (List et al., 2022) | 3.2795 | 0.7278 | 0% | 0.3972 | 0.3332 |
| | SVM + PosStr (List et al., 2022) | 1.6894 | 0.3692 | 15.52% | 0.1669 | 0.5418 |
| | RNN (Meloni et al., 2021) | 1.0671 | 0.2421 | 35.65% | 0.0899 | 0.6781 |
| | Transformer (present work) | 0.9553 | 0.2150 | 41.12% | 0.0842 | 0.7033 |
| Rom-phon | Random daughter (Chang et al., 2022) | 6.1534 | 0.6914 | 0.06% | 0.6264 | 0.4016 |
| | CorPaR + PosIni (List et al., 2022) | 1.6847 | 0.1978 | 22.18% | 0.0728 | 0.7403 |
| | SVM + PosStrIni (List et al., 2022) | 1.5787 | 0.1861 | 24.69% | 0.0713 | 0.7610 |
| | RNN (Meloni et al., 2021) | 0.9655 | 0.1224 | 52.31% | 0.0384 | 0.8296 |
| | Transformer (present work) | 0.8926 | 0.1137 | 53.75% | 0.0373 | 0.8435 |
| Rom-orth | Random daughter (Chang et al., 2022) | 4.2567 | 0.4854 | 2.97% | - | 0.5147 |
| | CorPaR + Ini (List et al., 2022) | 0.9531 | 0.1160 | 47.23% | - | 0.8400 |
| | SVM + PosStr (List et al., 2022) | 0.8988 | 0.1105 | 50.43% | - | 0.8501 |
| | RNN (Meloni et al., 2021) | 0.5941 | 0.0770 | 69.80% | - | 0.8916 |
| | Transformer (present work) | 0.5525 | 0.0720 | 71.23% | - | 0.9002 |
Table 3: Evaluation of models and baselines using various metrics, averaged across 5 runs (same hyperparameters, different seeds). Because Rom-orth is not in IPA, character edit distance is used instead of PED, and we cannot accurately calculate FER. See Section 6.1 for an explanation of each evaluation metric. See Table 4 for the standard deviation values.
Since the Romance dataset only includes 5 daughter languages, our results are insufficient to corroborate or contradict Cathcart and Wandl
(2020)'s findings: the more accurate the protoforms, the less accurate the phylogeny will be. It is not clear if the model's language embeddings are learning information that reflects shared innovations (sound changes that if shared among a set of daughter languages, would be acceptable justification for grouping them)—the only acceptable criterion for phylogenetic inference in historical linguistics (Campbell, 2013)—or if the model is learning superficial phonetic similarity.
## 7 Conclusion
By showing that Transformers can outperform previous architectures in protoform reconstruction despite the inherent data scarcity of the task, our work motivates future research in this area to take full advantage of the recent advancements in the Transformer space.
Accurate supervised reconstruction can help predict protoforms for cognate sets where linguists have not reconstructed one yet. Future work could reconstruct proto-languages whose linguist reconstructions are not available, by transferring knowledge learned from languages with already reconstructed protoforms. Furthermore, future work can leverage the abundance of work in unsupervised NMT to adapt our Transformer model for the unsupervised setting, a more realistic scenario for the historical linguist.
## Limitations
One limitation of our work is that the RNN (Meloni et al., 2021) actually outperforms our Transformer on the Chinese dataset in Chang et al. (2022). In addition, as with other neural approaches, our model requires significant amounts of data, which is often not available to historical linguists researching less well-studied language families based on field reports. Romance and Chinese have relatively many cognate sets because the protoforms are documented5, but a low-resource setup with 200 cognate sets would not fare well on our data-hungrier Transformer model. Furthermore, concatenating the entire cognate set may not work on language families with hundreds of languages such as Oceanic because the input sequence would be too long compared to the output protoform sequence.

5In the case of Chinese, only equivalence classes of pronunciations and not exact pronunciations are recorded.
Finally, we obtain our Chinese gold protoforms from Baxter and Sagart (2014)'s Middle Chinese reconstruction, which was actually a transcription of the *Qieyun*, a rhyme dictionary. Norman and Coblin (1995) disagree with relying on such a philological source and prefer comparative reconstructions that begin from daughter data. However, there is no available comparative reconstruction of Middle Chinese with protoforms corresponding to thousands of characters to use as a gold standard. Be that as it may, it seems clear that Middle Chinese as recorded in the *Qieyun* is not identical to the most recent ancestor of the Chinese languages. Its preface concedes that it is a compromise between Tang Dynasty dialects. The situation with Romance is, in some ways, comparable.
Classical Latin—the variety on which we train—is not the direct ancestor of modern Romance languages. Instead, they are descended from Vulgar Latin or Proto-Romance, which is not well-attested and is attested primarily through graffiti and other informal inscriptions. Proto-Romance reconstructions are also not exhaustive. As a result, it is difficult to find a dataset like Meloni et al. (2021) with thousands of such ancestor forms. We are also limited to the faithfulness of espeak-ng's Latin G2P, from which Meloni et al. (2021) obtain their phonetic Romance dataset.
For most language families, protoforms are not attested. In fact, as the term is often used, protoform refers to a form that is inferred only through linguists' comparative method. We adopt the other usage for simplicity. In practice, our approach would require reconstructions made by a linguist to serve as training labels for cognate sets.
## Acknowledgements
We would like to thank Liang (Leon) Lu for finding a bug in our implementation, Ying Chen for writing the code for the baselines, and Brendon Boldt and Graham Neubig for providing useful feedback.
## References
Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints.
Information retrieval, 12(4):461–486.
William H Baxter and Laurent Sagart. 2014. *Old Chinese: A new reconstruction*. Oxford University Press.
Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.
Alexandre Bouchard-Côté, Dan Klein, and Michael Jordan. 2008. Efficient inference in phylogenetic indel trees. In *Advances in Neural Information Processing* Systems, volume 21. Curran Associates, Inc.
Alexandre Bouchard-Côté, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated reconstruction of ancient languages using probabilistic models of sound change. Proceedings of the National Academy of Sciences, 110(11):4224–4229.
Lyle Campbell. 2013. *Historical Linguistics: an Introduction*. Edinburgh University Press.
Chundra Cathcart and Florian Wandl. 2020. In search of isoglosses: continuous and discrete language embeddings in Slavic historical phonology. In *Proceedings of the 17th SIGMORPHON Workshop on* Computational Research in Phonetics, Phonology, and Morphology, pages 233–244, Online. Association for Computational Linguistics.
Kalvin Chang, Chenxuan Cui, Youngmin Kim, and David R. Mortensen. 2022. WikiHan: A new comparative dataset for Chinese languages. In *Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022)*.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In *Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation*, pages 103–111, Doha, Qatar. Association for Computational Linguistics.
Alina Maria Ciobanu and Liviu P. Dinu. 2018. Ab initio: Automatic Latin proto-word reconstruction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1604–1614, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Arthur P Dempster, Nan M Laird, and Donald B Rubin.
1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22.
Liviu Dinu and Alina Maria Ciobanu. 2014. Building a dataset of multilingual cognates for the Romanian lexicon. In *Proceedings of the Ninth International* Conference on Language Resources and Evaluation
(LREC'14), pages 1038–1043, Reykjavik, Iceland.
European Language Resources Association (ELRA).
Joseph Felsenstein. 2013. Phylip (phylogeny inference package), version 3.695. Department of Genome Sciences, University of Washington, Seattle.
Clémentine Fourrier. 2022. *Neural Approaches to Historical Word Reconstruction*. Ph.D. thesis, Université PSL (Paris Sciences & Lettres).
Andre He, Nicholas Tomlin, and Dan Klein. 2022. Neural unsupervised reconstruction of protolanguage word forms. *arXiv preprint arXiv:2211.08684*.
侯精一 Jīngyī Hóu, editor. 2004. *Xiàndài Hànyǔ* fāngyán yīnkù 现代汉语方言音库 [Phonological database of Chinese dialects]. Shànghǎi Jiàoyù 上海教育, Shànghǎi 上海.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields:
Probabilistic models for segmenting and labeling sequence data. In *Proceedings of the Eighteenth International Conference on Machine Learning*, ICML
'01, page 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Vladimir I Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals.
Soviet physics doklady, 10(8):707–710.
Johann-Mattis List. 2019. Beyond edit distances: Comparing linguistic reconstruction systems. *Theoretical Linguistics*, 45(3-4):247–258.
Johann-Mattis List. 2021. CLDF dataset derived from Hóu's "Phonological Database of Chinese Dialects" from 2004. Zenodo.
Johann-Mattis List, Robert Forkel, and Nathan Hill.
2022. A new framework for fast automated phonological reconstruction using trimmed alignments and sound correspondence patterns. In *Proceedings of* the 3rd Workshop on Computational Approaches to Historical Language Change, pages 89–96, Dublin, Ireland. Association for Computational Linguistics.
Johann-Mattis List, Nelson-Sathi Shijulal, William Martin, and Hans Geisler. 2014. Using phylogenetic networks to model chinese dialect history. *Language Dynamics and Change*, 4(2):222–252.
Carlo Meloni, Shauli Ravfogel, and Yoav Goldberg.
2021. Ab antiquo: Neural proto-language reconstruction. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4460–4473, Online. Association for Computational Linguistics.
David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori S. Levin.
2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In *Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers*, pages 3475–3484.
Jerry L. Norman and W. South Coblin. 1995. A new approach to Chinese historical linguistics. *Journal* of the American Oriental Society, 115(4):576–584.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Simone Pompei, Vittorio Loreto, and Francesca Tria.
2011. On the accuracy of language trees. *PloS one*,
6(6):e20109.
Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? *arXiv preprint* arXiv:1804.05416.
Andreas Sand, Morten K Holt, Jens Johansen, Rolf Fagerberg, Gerth Stølting Brodal, Christian NS Pedersen, and Thomas Mailund. 2013. Algorithms for computing the triplet and quartet distances for binary general trees. *Biology*, 2(4):1189–1209.
Andreas Sand, Morten Kragelund Holt, Jens Johansen, Rolf Fagerberg, Gerth Stølting Brodal, Thomas Mailund, and Christian N. S. Pedersen. 2014. tqdist:
A library for computing the quartet and triplet distances between binary or general trees. *BMC Bioinformatics*, yy(xx):ii–jj.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing systems*, 30.
Joe H Ward Jr. 1963. Hierarchical grouping to optimize an objective function. *Journal of the American statistical association*, 58(301):236–244.
## A Training
We split 70%, 10%, and 20% of our dataset into train, validation, and test sets, respectively. We conduct hyperparameter searches using WandB
(Biewald, 2020) and use early stopping, picking the epoch with lowest edit distance on validation data. All experiments are performed on a Ubuntu server with 4 GPUs and 20 CPUs. For both the RNN and the Transformer, Meloni et al. (2021)'s dataset takes less than 7 GPU hours to run, while Hóu (2004) takes less than 1 GPU hour. For the large Romance orthographic dataset, the RNN model has around 480,000 parameters, while the Transformer has around 800,000 parameters.
## B Hyper-Parameters
Refer to Table 5 and Table 6 for the best hyperparameters we found during hyperparameter search via WandB.
## C Supplementary Results
In order to compare our model to earlier work, we used the Rom-phon and Rom-orth datasets from Meloni et al. (2021). However, this set includes a subset from Ciobanu and Dinu (2018) which is not freely redistributable. So that our results can be reproduced, we also computed them on the publicly available subset of Meloni et al. (2021)'s dataset, which is presented in Table 7.
Phylogenetic trees for Chinese were also extracted from the RNN and Transformer models.
These are shown in Figures 3 and 4.
We also plot the dendrograms derived from the Rom-orth dataset in Figure 5.
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![9_image_0.png](9_image_0.png)
| Dataset | Model | PED ↓ | NPED ↓ | Acc % ↑ | FER ↓ | BCFS ↑ |
|---|---|---|---|---|---|---|
| Sinitic | Random daughter | 3.7702 | 0.8405 | 0% | 0.2893 | 0.2748 |
| | Majority constituent | 3.5031 | 0.7806 | 0% | 0.2013 | 0.3695 |
| | CorPaR | 3.2795 | 0.7278 | 0% | 0.3972 | 0.3332 |
| | SVM +PosStr | 1.6894 | 0.3692 | 15.52% | 0.1669 | 0.5418 |
| | RNN | 1.0671 ± 0.0619 | 0.2421 ± 0.0140 | 35.65% ± 1.60% | 0.0899 ± 0.0048 | 0.6781 ± 0.0174 |
| | Transformer (present work) | 0.9553 ± 0.0392 | 0.2150 ± 0.0075 | 41.12% ± 2.3% | 0.0842 ± 0.0070 | 0.7033 ± 0.0087 |
| Rom-phon | Random daughter | 6.1534 | 0.6914 | 0.06% | 0.6264 | 0.4016 |
| | CorPaR +PosIni | 1.6847 | 0.1978 | 22.18% | 0.0728 | 0.7403 |
| | SVM +PosStrIni | 1.5787 | 0.1861 | 24.69% | 0.0713 | 0.7610 |
| | RNN | 0.9655 ± 0.0189 | 0.1224 ± 0.0022 | 52.31% ± 0.63% | 0.0384 ± 0.0011 | 0.8296 ± 0.0029 |
| | Transformer (present work) | 0.8926 ± 0.0166 | 0.1137 ± 0.0017 | 53.75% ± 0.40% | 0.0373 ± 0.0009 | 0.8435 ± 0.0026 |
| Rom-orth | Random daughter | 4.2567 | 0.4854 | 2.97% | - | 0.5147 |
| | CorPaR +Ini | 0.9531 | 0.1160 | 47.23% | - | 0.8400 |
| | SVM +PosStr | 0.8988 | 0.1105 | 50.43% | - | 0.8501 |
| | RNN | 0.5941 ± 0.0100 | 0.0770 ± 0.0015 | 69.80% ± 0.22% | - | 0.8916 ± 0.0019 |
| | Transformer (present work) | 0.5525 ± 0.0104 | 0.0720 ± 0.0017 | 71.23% ± 0.52% | - | 0.9002 ± 0.0017 |
Table 4: Evaluation of models and baselines using various metrics, averaged across 5 runs (same hyperparameters, different seeds), with standard deviations. Because Rom-orth is not in IPA, character edit distance is used instead of PED, and we cannot accurately calculate FER. See Section 6.1 for an explanation of each evaluation metric.
| | Romance (phon & orth) | Sinitic |
|-------------------------|-----------|-----------|
| learning rate | 0.00013 | 0.0007487 |
| num_encoder_layers | 3 | 2 |
| num_decoder_layers | 3 | 5 |
| embedding size | 128 | 128 |
| n_head | 8 | 8 |
| dim_feedforward | 128 | 647 |
| dropout | 0.202 | 0.1708861 |
| training epochs | 200 | 200 |
| warmup epochs | 50 | 32 |
| weight decay | 0 | 0.0000001 |
| batch size | 1 | 32 |
Table 5: Hyper-parameters used in training the Transformer
| | Romance-phon | Romance-orth | Sinitic |
|--------------------|----------------|-----------|----------|
| learning rate | 0.00055739 | 0.000964 | 0.000864 |
| num_encoder_layers | 1 | 1 | 1 |
| num_decoder_layers | 1 | 1 | 1 |
| embedding size | 107 | 51 | 78 |
| hidden size | 185 | 130 | 73 |
| dim_feedforward | 147 | 111 | 136 |
| dropout | 0.1808 | 0.323794 | 0.321639 |
| training epochs | 181 | 193 | 237 |
| warmup epochs | 15 | 15 | 15 |
| batch size | 8 | 8 | 4 |
Table 6: Hyper-parameters used in training the RNN
| Dataset | Model | PED ↓ | NPED ↓ | Acc % ↑ | FER ↓ | BCFS ↑ |
|---|---|---|---|---|---|---|
| Rom-phon | Random daughter (Chang et al., 2022) | 7.1880 | 0.8201 | 0% | 1.1396 | 0.3406 |
| | CorPaR + Ini (List et al., 2022) | 2.0885 | 0.2491 | 14.29% | 0.0874 | 0.6799 |
| | SVM + PosStrIni (List et al., 2022) | 1.9005 | 0.2276 | 17.05% | 0.0883 | 0.7039 |
| | RNN (Meloni et al., 2021) | 1.4581 | 0.1815 | 36.68% | 0.0592 | 0.7435 |
| | Transformer (present work) | 1.2516 | 0.1573 | 41.38% | 0.0550 | 0.7790 |
| Rom-orth | Random daughter (Chang et al., 2022) | 6.3272 | 0.6542 | 0.55% | - | 0.4023 |
| | CorPaR + PosStrIni (List et al., 2022) | 1.8313 | 0.2001 | 18.89% | - | 0.7227 |
| | SVM + PosStr (List et al., 2022) | 1.6995 | 0.1867 | 21.66% | - | 0.7454 |
| | RNN (Meloni et al., 2021) | 1.3189 | 0.1505 | 38.89% | - | 0.7742 |
| | Transformer (present work) | 1.1622 | 0.1343 | 45.53% | - | 0.7989 |
Table 7: Evaluation of models and baselines with various metrics on Meloni et al. (2021)'s Romance datasets, where all entries from Dinu and Ciobanu (2014) are removed, for 1 run (using the hyperparameters of the best run on the full dataset)
Table 8: One cognate set, with Latin as the protoform and all columns to its right as the daughter cognates
| Latin | Romanian | French | Italian | Spanish | Portuguese |
|------------------|-------------|------------|---------------|-------------|--------------|
| [kɔlleːktɪoːnɛm] | [kolektsie] | [kɔlɛksjɔ̃] | [kolletsione] | [kolekθjon] | [kulɨsɐ̃ʊ̃] |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 8
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1

A4. Have you used AI writing assistants when working on this paper?
Not applicable. Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3, 4
✓ B1. Did you cite the creators of artifacts you used?
Sections 3,4,5,6
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Sections 3, 5.1
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 2 and Appendix Section A
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Hyperparameter search: 5.1 Hyperparameter values: Appendix Section B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Table 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.1, 6.1, 6.3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
hardt-2023-ellipsis | Ellipsis-Dependent Reasoning: a New Challenge for Large Language Models | https://aclanthology.org/2023.acl-short.4 | We propose a novel challenge for large language models: ellipsis-dependent reasoning. We define several structures of paired examples, where an ellipsis example is matched to its non-ellipsis counterpart, and a question is posed which requires resolution of the ellipsis. Test results show that the best models perform well on non-elliptical examples but struggle with all but the simplest ellipsis structures. |
## Ellipsis-Dependent Reasoning: A New Challenge For Large Language Models

Daniel Hardt
Copenhagen Business School [email protected]
## Abstract
We propose a novel challenge for large language models: ellipsis-dependent reasoning.
We define several structures of paired examples, where an ellipsis example is matched to its non-ellipsis counterpart, and a question is posed which requires resolution of the ellipsis.
Test results show that the best models perform well on non-elliptical examples but struggle with all but the simplest ellipsis structures.
## 1 Introduction
Ellipsis is a fundamental feature of human language, occurring in all registers, where parts of sentences are omitted, although the missing parts are essential for understanding the meaning. The following is an example of Verb Phrase Ellipsis
(VPE)(Bos and Spenader, 2011):
(1) William went running. Harold did too.
(1) is understood as asserting that Harold went running; that is, the hearer or reader naturally fills in the missing material. This is done by identifying the antecedent VP, *went running*, in the first sentence. The following is the non-elliptical counterpart of (1):
(2) William went running. Harold went running too.
With such examples, we can test understanding of ellipsis by targeting the ellipsis phrase with a simple Yes/No question:
(3) Did Harold go running?
If a system answers the question incorrectly for (1),
the ellipsis example, but answers correctly for (2),
the non-elliptical counterpart of (1), we can ascribe the result specifically to the challenge of ellipsis, since the examples are otherwise identical.
As with pronominal anaphora and other discourse processes, there is great flexibility in the way ellipsis can occur in discourse. Example (1)
involves two simple adjacent sentences. It is also possible for ellipsis or the antecedent to occur in embedded clauses. Furthermore, ellipsis can occur either before or after the antecedent. Finally, an arbitrary amount of material can intervene between the antecedent and the ellipsis occurrence.
In this paper, we propose the challenge of ellipsis-dependent reasoning. This challenge consists of examples involving an ellipsis clause, the target. Each ellipsis example is paired with its nonelliptical counterpart, where the target clause is overt rather than elliptical. We then pose a question whose answer is dependent on the target clause. A
key aspect of the challenge is that ellipsis occurrences are possible in a variety of diverse structural configurations. We test a series of GPT-3 models
(GPT) on several such ellipsis structures.
## 2 Related Work
There is a large literature concerning the probing of language models from a variety of perspectives. Furthermore, there has been substantial work specifically addressing ellipsis in NLP. In this paper, we are proposing the challenge of ellipsisdependent reasoning. This proposal builds on various strands of prior research; below we consider some particularly relevant aspects of this literature.
## 2.1 Probing Models For Knowledge
The Winograd Schema (Kocijan et al. (2022);
Levesque et al. (2012)) involves test examples that use the linguistic problem of pronoun resolution to gain insight into the commonsense reasoning abilities of an AI system. To do this, the Winograd Schema requires pairs of examples that differ only in one specific, small way, as in (4):
(4) The city councilmen refused the demonstrators a permit because they feared/advocated violence.
With "feared", the pronoun "they" refers to the city councilmen, while with "advocated", it refers to the demonstrators. Humans understand this because of general, commonsense knowledge about what would reasonably explain the refusal of a permit in the two cases. It is difficult to ensure that such examples can *only* be solved through such sophisticated reasoning, and, according to Kocijan et al.
(2022)[p. 8], "Solving Winograd schemas is not a surrogate for the ability to do commonsense reasoning".
A different approach is exemplified by Lin et al.
(2019): here, examples are constructed which test specific aspects of linguistic knowledge of a system, namely, whether BERT embeddings "encode hierarchical information". For example, a task is defined to identify the main auxiliary verb in a sentence, even in cases where the main auxiliary is not the first auxiliary verb to appear. Training and testing datasets are automatically generated using a context-free grammar for several such tasks involving hierarchical syntactic information.
## 2.2 Anaphora And Question Answering
Quoref (Dasigi et al. (2019); Zhang and Zhao
(2022)) is a question-answer dataset designed so that correct answers cannot be given unless a coreference relationship is correctly identified; that is, the reasoning involved in question answering is dependent on resolving coreference. This is, in a sense, the inverse of the Winograd schema, where resolving coreference is dependent upon reasoning.
Just as with the Winograd schema, it is difficult to ensure that resolving this dependency is required for system success. (Dasigi et al., 2019)[p. 1] note that this is "challenging, because it is hard to avoid lexical cues that shortcut complex reasoning", and based on a random sample, found that coreference resolution was required for 78% of questions.
## 2.3 Ellipsis As A Task
There has been substantial work on ellipsis as a discrete NLP task (Khullar (2020), Zhang et al.
(2019); Kenyon-Dean et al. (2016); Bos and Spenader (2011)). Vanderlyn et al. (2022) surveys a variety of forms of what they call "implicit reference", which includes ellipsis and related phenomena. Aralikatte et al. (2021) frame ellipsis as a question-answering task, i.e., a task of locating an antecedent, understood as a span of tokens in context. Aralikatte et al. (2021) report token F1 scores of 78.66 for VPE and 86.01 for sluicing, another form of ellipsis. It's important to note that the task here, of antecedent identification, is a sub-part of the ellipsis challenge. Before the antecedent is identified, an ellipsis occurrence must be identified, and after the antecedent is identified, it must be interpreted, or "reconstructed", at the ellipsis site.
## 2.4 Relevance For Ellipsis-Dependent Reasoning
The specific task of ellipsis is addressed in work like that of Aralikatte et al. (2021), but the key difference here is that we are probing for a complete solution to the ellipsis problem. The proposed ellipsis-dependent reasoning task involves a question that can only be answered correctly if the ellipsis is properly identified and interpreted. This combines aspects of the preceding works in a novel way: like the Winograd schema and the syntactic work by Lin et al. (2019), it probes for what we see as a specific type of psychologically-defined knowledge: namely, a representation of context that supports the resolution of ellipsis. Similarly to the work on Quoref, we use targeted questions to probe for discourse-related knowledge.
There is an extensive literature on the contextual interpretation of natural language, resting on the idea of a dynamic, ongoing model of discourse. For example, Discourse Representation Theory (Kamp, 1981) describes a semantic model supporting discourse phenomena such as pronominal and temporal anaphora, and Sag and Hankamer (1984) argue explicitly that ellipsis and other such phenomena are interpreted with respect to a discourse model
(Garnham, 2010). As one study puts it, "Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context" (Martin and McElree, 2008). In this paper, we seek to determine whether a large language model is capable of such an interpretive process.
## 3 Data
There is a great deal of variety in the structural configurations in which ellipsis can occur. In tables 1 and 2 we define structures for ellipsis and antecedent occurrences.
In all structures, there is an ellipsis occurrence, and the question targets the ellipsis occurrence. Furthermore, each ellipsis example is paired with a non-ellipsis version. The first two structures,
| Structure | Example |
|-------------|--------------------------------------------------------------------------------|
| Separate Sentence | William went running. John did too. |
| Conjoined Sentence | William went running, and John did too. |
| Subordinate Antecedent | Because William went running, John did. |
| Subordinate VPE | William went running after John did. |
| Backwards | Because John did, William went running. |
| Two Actions | William didn't go running but John did. William went shopping and John didn't. |
| Question | Did John go running? |

Table 1: Structures for Positive Answer

| Structure | Example |
|-------------|--------------------------------------------------------------------------------|
| Separate Sentence | William went running. But John didn't. |
| Conjoined Sentence | William went running, but John didn't. |
| Subordinate Antecedent | Because William went running, John didn't. |
| Subordinate VPE | William went running after John didn't. |
| Backwards | Because John didn't, William went running. |
| Two Actions | William didn't go shopping but John did. William went running and John didn't. |
| Question | Did John go running? |

Table 2: Structures for Negative Answer
Separate Sentence and Conjoined Sentence, involve two adjacent main clauses. This is followed by two structures in which either the VPE or antecedent occur in a subordinate clause. There is a Backwards structure where the VPE precedes the antecedent; here the VPE is in a subordinate clause. Finally, we have a Two Actions structure; that is, two ellipsis occurrences each with their respective antecedent VPs. We have two versions: one in which the target question has a correct answer of "Yes", shown in table 1, and another where the target question has a correct answer of "No", shown in table 2.
We generate large numbers of examples of each structure by performing random substitutions for both the subject and verb. The substitution lists are given in the appendix, along with samples of each structure and the size of the resulting sets. 1
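As an illustration, a minimal generator for one structure (Conjoined Sentence, positive answer) is sketched below; the helper names and the capitalization handling are illustrative choices, while the substitution lists follow Table 3 in the appendix. Analogous generators can be written for the other structures in Tables 1 and 2.

```python
import itertools

SUBJECTS = ["Mary", "Harold", "Sam", "William", "The teacher", "The student",
            "The driver", "My friend", "John", "Elena", "Karen", "Mrs Jones"]
VERBS = ["swimming", "shopping", "running", "walking", "skiing", "jogging", "hiking"]

def question_subject(subj):
    # "The teacher" -> "the teacher" when it appears inside the question
    return subj[0].lower() + subj[1:] if subj.split()[0] in ("The", "My") else subj

def conjoined_sentence_positive():
    """Yield (ellipsis, non_ellipsis, question, answer) tuples for the
    Conjoined Sentence structure of Table 1."""
    for subj1, subj2 in itertools.permutations(SUBJECTS, 2):
        for verb in VERBS:
            ellipsis = f"{subj1} went {verb}, and {subj2} did too."
            full = f"{subj1} went {verb}, and {subj2} went {verb} too."
            question = f"Did {question_subject(subj2)} go {verb}?"
            yield ellipsis, full, question, "Yes"
```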
## 4 Test
For each instantiation of a given structure, we produce paired ellipsis and non-ellipsis examples, with an associated Yes/No question. We randomly select 1000 examples for each structure, including 500 ellipsis examples and 500 examples which are their non-elliptical counterparts. Each example is presented to the system, preceded by the text, "Please give a Yes or No answer:". We test five GPT-3 models on these structures: Davinci-003, Davinci-002, Curie-001, Babbage-001, and Ada-001. According to the GPT-3 documentation, Davinci-003 is the most powerful model and Ada-001, the least.
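A minimal sketch of this querying step is shown below, assuming the legacy (pre-1.0) openai Python client. The mapping from the model names above to API identifiers, the decoding parameters (max_tokens, temperature), and the exact way the instruction, example, and question are concatenated are assumptions of the sketch rather than reported settings.

```python
import openai  # assumes the legacy (pre-1.0) openai Python client

MODELS = ["text-davinci-003", "text-davinci-002", "text-curie-001",
          "text-babbage-001", "text-ada-001"]

def ask_yes_no(model, example, question):
    """Present one (non-)ellipsis example with its Yes/No question."""
    prompt = f"Please give a Yes or No answer: {example} {question}"
    response = openai.Completion.create(
        model=model, prompt=prompt, max_tokens=5, temperature=0)
    text = response["choices"][0]["text"].strip().lower()
    return "Yes" if text.startswith("yes") else "No"
```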
## 5 Results
Figure 1 gives the accuracy for ellipsis and nonellipsis, for each of the five models. We have set up the test examples so that an ellipsis example is paired with a non-ellipsis example that is otherwise identical. Because of this, we claim that the difference in accuracy of the non-ellipsis case vs. the ellipsis case provides a measurement of the difficulty specifically posed by ellipsis. For all but the least powerful model, Ada, the non-ellipsis accuracy is substantially higher than ellipsis accuracy, supporting the hypothesis that ellipsis-dependent reasoning presents a difficult challenge for these models. While the Ada model actually performs somewhat better for ellipsis than non-ellipsis, this is not because the Ada model does well with ellipsis cases; rather, the model has great difficulty with both the ellipsis and non-ellipsis cases, and is close to a random guessing baseline of .50.
In figures 2 through 6, we present results for each model. We show the accuracy for each structure, for both the ellipsis version and the non-ellipsis version. Consider the most powerful models, Davinci-003 and Davinci-002. In figures 2 and 3, we can see that ellipsis is not difficult in the first two structures:
2Sent (Separate Sentence) and 1Sent (Conjoined Sentence). Here the accuracy is nearly perfect for
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
the ellipsis as well as the non-ellipsis condition.
However, in all the other structures, there is a large divergence in accuracy between ellipsis and non-ellipsis, for both the Davinci-003 and Davinci-002 models. Subordination for either antecedent or ellipsis is quite challenging, with accuracies ranging from 48.8 to 85.8. The Backwards and Two Actions structures are even more difficult for ellipsis.
## 6 Analysis
For the two most powerful models, it is clear that ellipsis poses a difficult challenge, except in the two simplest ellipsis structures. For the less powerful models, the picture is mixed. For these models, the non-ellipsis examples are themselves a difficult challenge, so we are not able to observe the specific difficulties posed by ellipsis.
As we can see in figure 1, the Davinci-002 model performs somewhat better overall than Davinci003, on both ellipsis and non-ellipsis. However, figures 2 and 3 show that the advantage of Davinci002 on ellipsis is exclusively due to the subordinate antecedent construction. In every other ellipsis structure, Davinci-003 performs better than Davinci-002.
There are striking differences in the distribution of errors. For both the Davinci-003 and Davinci002 models, errors are nearly always false negatives
- that is, incorrect "No" answers. There are virtually no false positives, either for the ellipsis case or nonellipsis case. For the other three models, there are many errors of each type, with a much higher ratio of false positives.
## 7 Conclusion
Most of the current rapid progress in NLP is due to pre-trained large language models. GPT-3 is an impressive publicly available collection of such
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
models, and is able to perform in a way that suggests human-level understanding. Because of this, it is important to explore areas in which it might still differ from human language understanding. In this paper we have argued that ellipsis is one such area. For many simple ellipsis structures, the most powerful GPT-3 models struggle, with accuracies far lower on ellipsis examples than on non-elliptical counterparts.
In many ways, GPT-3 appears to understand the texts that it processes, often being able to answer questions that appear to rely on sophisticated reasoning. However, the challenge of ellipsis-dependent reasoning provides evidence that GPT-3 is not able to understand in anything like the way humans do.
## 8 Limitations
This paper argues that the proposed task of ellipsis-dependent reasoning is a difficult challenge for GPT-3 models, which are among the most powerful current language models. The data constructed here is restricted to English, and furthermore is restricted to a single form of ellipsis, namely verb phrase ellipsis. It may well be that other forms of ellipsis may give rise to different effects, and it is also important to test the claims made here on other languages.
## References
OpenAI GPT-3 Models Overview. Accessed on 2023-01-10.
Rahul Aralikatte, Matthew Lamm, Daniel Hardt, and Anders Søgaard. 2021. Ellipsis resolution as question answering: An evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 810–817, Online. Association for Computational Linguistics.
Johan Bos and Jennifer Spenader. 2011. An annotated corpus for the analysis of VP ellipsis. Language resources and evaluation, 45(4):463–494.
Pradeep Dasigi, Nelson F Liu, Ana Marasović, Noah A
Smith, and Matt Gardner. 2019. Quoref: A
reading comprehension dataset with questions requiring coreferential reasoning. arXiv preprint arXiv:1908.05803.
Alan Garnham. 2010. Models of processing: Discourse.
Wiley Interdisciplinary Reviews: Cognitive Science, 1(6):845–853.
Hans Kamp. 1981. A theory of truth and semantic representation. In Formal methods in the study of language, pages 277–322.
Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2016. Verb phrase ellipsis resolution using discriminative and margin-infused algorithms.
In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1734–1743.
Payal Khullar. 2020. Exploring statistical and neural models for noun ellipsis detection and resolution in english. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 139–145.
Vid Kocijan, Ernest Davis, Thomas Lukasiewicz, Gary Marcus, and Leora Morgenstern. 2022. The Defeat of the Winograd Schema Challenge. arXiv preprint arXiv:2201.02387.
Hector Levesque, Ernest Davis, and Leora Morgenstern.
2012. The Winograd Schema Challenge. In *Thirteenth international conference on the principles of* knowledge representation and reasoning.
Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019.
Open Sesame: getting inside BERT's linguistic knowledge. *arXiv preprint arXiv:1906.01698*.
Andrea E Martin and Brian McElree. 2008. A contentaddressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3):879–906.
Ivan A Sag and Jorge Hankamer. 1984. Toward a theory of anaphoric processing. *Linguistics and philosophy*, pages 325–345.
Lindsey Vanderlyn, Talita Anthonio, Daniel Ortega, Michael Roth, and Ngoc Thang Vu. 2022. Toward implicit reference in dialog: A survey of methods and data. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 587–600.
Wei-Nan Zhang, Yue Zhang, Yuanxing Liu, Donglin Di, and Ting Liu. 2019. A neural network approach to verb phrase ellipsis resolution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7468–7475.
Zhuosheng Zhang and Hai Zhao. 2022. Tracing origins:
Coreference-aware machine reading comprehension.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1281–1292.
## A Appendix

## A.1 Sample Instantiations
Below are sample instantiations for the Single Sentence and Two Action structures. The complete datasets for all the structures can be accessed at https://github.com/DanHardtDK/ellipsisGPT3.
## Single Sentence
Mary went swimming, and Harold did too.
Mary went swimming, and Harold went swimming too.
Q: Did Harold go swimming?
A: Yes

Mary went swimming, but Harold didn't.
Mary went swimming, but Harold didn't go swimming.
Q: Did Harold go swimming?
A: No
## Two Actions
Mary didn't go swimming but Harold did.
Mary went shopping and Harold didn't.
Mary didn't go swimming but Harold did go swimming.
Mary went shopping and Harold didn't go shopping.
Q: Did Harold go swimming?
A: Yes

Q: Did Harold go shopping?
A: No
| Category | Substitution List |
|-----------|--------------------------------------------------------------------------------------------------------------------------------------|
| Subject | "Mary", "Harold", "Sam", "William", "The teacher", "The student", "The driver", "My friend", "John", "Elena", "Karen", "Mrs Jones" |
| Verb | "swimming", "shopping", "running", "walking", "skiing", "jogging", "hiking" |
Table 3: Substitutions
## A.2 Substitutions
The examples are produced using the substitutions for subjects and verbs in the different structures, as shown in table 3.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 8
✓ A2. Did you discuss any potential risks of your work?
only in the sense of limitations - there is a risk that the conclusions will not extend beyond English, and the particular models considered here
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Appendix
✓ B1. Did you cite the creators of artifacts you used?
section 1 - GPT3 from OpenAI
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
GPT3 is freely available for research, we have made our data available on Github
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? research use of GPT3 is well established
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We created simple synthetic data and see no issues here
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
section 3

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
standard access to GPT3 models
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? no search of hyperparameters
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 5

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
tang-surdeanu-2023-bootstrapping | Bootstrapping Neural Relation and Explanation Classifiers | https://aclanthology.org/2023.acl-short.5 | We introduce a method that self trains (or bootstraps) neural relation and explanation classifiers. Our work expands the supervised approach of CITATION, which jointly trains a relation classifier with an explanation classifier that identifies context words important for the relation at hand, to semi-supervised scenarios. In particular, our approach iteratively converts the explainable models{'} outputs to rules and applies them to unlabeled text to produce new annotations. Our evaluation on the TACRED dataset shows that our method outperforms the rule-based model we started from by 15 F1 points, outperforms traditional self-training that relies just on the relation classifier by 5 F1 points, and performs comparatively with the prompt-based approach of CITATION (without requiring an additional natural language inference component). | # Bootstrapping Neural Relation And Explanation Classifiers
Zheng Tang, Mihai Surdeanu Department of Computer Science University of Arizona, Tucson, Arizona, USA
{zhengtang, msurdeanu}@arizona.edu
## Abstract
We introduce a method that self trains (or bootstraps) neural relation and explanation classifiers. Our work expands the supervised approach of (Tang and Surdeanu, 2022), which jointly trains a relation classifier with an explanation classifier that identifies context words important for the relation at hand, to semi-supervised scenarios. In particular, our approach iteratively converts the explainable models' outputs to rules and applies them to unlabeled text to produce new annotations. Our evaluation on the TACRED dataset shows that our method outperforms the rule-based model we started from by 15 F1 points, outperforms traditional self-training that relies just on the relation classifier by 5 F1 points, and performs comparatively with the prompt-based approach of Sainz et al. (2021) (without requiring an additional natural language inference component).1

1We release all code and data behind this work at: https://github.com/clulab/releases/tree/master/acl2023-bootstrappingRules/.

## 1 Introduction

Recently Tang and Surdeanu (2022) proposed a supervised method that jointly trains a relation classifier (e.g., which extracts the relation per:city_of_birth between *John* and *London* in the sentence *John was born in London*) with an explanation classifier that identifies context words that are important for the relation at hand (e.g., *born* and *in* in the above example). One limitation of this method is that, similar to other neural approaches, it is data hungry. This is an important drawback for real-world applications where annotated data is expensive to obtain.

In this work, we expand this approach to semi-supervised scenarios where the only supervision comes from a few example rules. In particular, our method iteratively converts the explanations produced by the above method into rules, and uses these rules to generate new "silver" annotations that are added to the training data. The specific contributions of this effort are:
(1) We introduce a novel semi-supervised neurosymbolic strategy for relation extraction that is explainable and requires minimal supervision. Our approach is neuro-symbolic because it relies on rules to explain the predictions of the neural relation classifier, and also to self-label training data.
(2) We evaluate this approach on the TACRED
dataset (Zhang et al., 2017) and obtain competive results in a few-shot setting, where the only supervision comes from a small number of example rules.2 Our experiments highlight several important observations. First, our approach outperforms the model that contains the seed rules by 15 F1 points, which validates the self-training direction. Second, our method performs considerably better than a sister approach that uses the relation classifier (rather than the rules generated from explanations) for self supervision. We hypothesize that this is because the neural classifier suffers more from the "curse of dimensionality" due to its large number of parameters and the small amount of training data than our rules, which are constrained to simple syntactic patterns. Third, our approach performs comparatively with prompt-based methods (Sainz et al., 2021; Zhang et al., 2022), even though our direction is simpler as it does not require a separate natural language inference component.
## 2 Related Work

For brevity, we focus our related work discussion on semi-supervised directions for information extraction that are closest to the proposed work: bootstrapping/self-training and recent prompt-based zero- or few-shot methods.
## 2.1 Bootstrapping/Self-Training
Typical bootstrapping methods iterate through three steps: (a) annotate seed data using a small amount of human supervision (e.g., rules for information extraction); (b) train a model with the available annotations, and, finally, (c) apply the model on unlabeled texts to produce new "silver" annotations (Abney, 2002). These approaches were popular before the deep-learning revolution. For example, Yarowsky (1995) used bootstrapping for word sense disambiguation; Riloff (1996) used it for dictionary acquisition; and Collins and Singer (1999)
relied on bootstrapping for named entity classification. More recently, Gupta and Manning (2015)
proposed a bootstrapping algorithm for named entity extraction that expands the set of known entities using word embeddings and k-nearest neighbor clustering. Eyal et al. (2021) used a syntactic search engine (Shlain et al., 2020) to bootstrap relation extraction. They also utilized natural language generation to further augment training data, which led to improved results. To our knowledge, we are the first to apply bootstrapping to a neuro-symbolic information extraction method, providing us both generalizability and explainability.
## 2.2 **Prompt-Based Zero- Or Few-Shot Learning**
Recent large pre-trained language models (PLMs)
with a huge number of parameters have shown the ability to handle NLP tasks with only a few examples or with prompts. Sainz et al. (2021) reformatted the relation extraction task as a natural language inference (NLI) task driven by a small set of manual templates. They obtained state-of-the-art results on the TACRED relation extraction dataset (Zhang et al., 2017) in both zero- and few-shot scenarios.
The main limitation of this work is that it relies on a transformer-based NLI engine, which is not available in every domain. Wei et al. (2022) show that PLMs can perform multi-hop reasoning when using chain-of-thought prompts. Zhang et al. (2022)
propose a prompt-based rule discovery and model boosting. However, Webson and Pavlick (2022)
showed that the PLMs do not actually understand the prompt, which makes their decisions unreliable.
Unlike the prompt-based approaches, our approach does not need the specific engine, e.g., for NLI, to perform the task. This gives us more flexibility in the choice of PLM and application domain.
## 3 Approach
Similar to traditional bootstrapping (Abney, 2002), our approach iteratively trains its classifier with the currently annotated data and applies the resulting model to the raw data to produce new annotations.
R0 ←− Rmanual;
Dtrain ←− RuleExecutor(R0, Draw);
for i ←− 1 to N do
    Mi ←− fEC−RC(Dtrain);
    Pi, Ei ←− Mi(Draw);
    Ri ←− RuleGenerator(Pi, Ei);
    Dtrain ←− Dtrain + RuleExecutor(Ri, Draw);
end

Algorithm 1: Pseudocode of our training procedure. Rmanual is the small set of seed rules; Draw is the collection of unlabeled sentences. fEC−RC is the joint explanation-relation classifier of Tang and Surdeanu (2022). Mi is the trained neural model in the ith iteration, Pi and Ei are the Mi model's outputs (labels and explanations), and Ri is the set of new rules generated from Mi's outputs.
However, unlike traditional self training, which uses the classifier to annotate data, our approach converts the current model and data into rules, and uses the generated rules to annotate data. As we discuss in Section 4 this performs better empirically.
Algorithm 1 shows the overall training procedure.
We discuss the three key components below.
(1) Rule Executor: We use the Odin (Valenzuela-Escárcega et al., 2016) system as our rule executor. Common rules in this paper are syntactic patterns that contain a lexical trigger (or predicate) and syntactico-semantic arguments. These rules can be summarized as if-this-then-that patterns, e.g.: if predicate=*born* and nsubj is PERSON and nmod_in is CITY then relation=per:city_of_birth.3 The rule executor efficiently matches these patterns over the syntactic trees of sentences.

3 nsubj and nmod_in are syntactic dependencies that indicate nominal subject and indirect object attached to the verb through the preposition in, respectively.
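To make the if-this-then-that pattern above concrete, here is a minimal sketch of how such a rule could be represented and matched in code. This is not Odin syntax: the `SyntacticRule` class and the dictionary-based dependency representation are illustrative assumptions.

```python
# Illustrative sketch only (not the Odin rule language): an if-this-then-that
# syntactic pattern matched against a toy dependency representation.
from dataclasses import dataclass, field

@dataclass
class SyntacticRule:
    label: str                                  # relation label the rule assigns
    trigger: str                                # lexical trigger (predicate)
    args: dict = field(default_factory=dict)    # dependency label -> required entity type

    def matches(self, sentence):
        """sentence: {'tokens': [...], 'entities': {token_idx: TYPE},
                      'deps': {(head_idx, dep_label): child_idx}}"""
        for head, token in enumerate(sentence["tokens"]):
            if token != self.trigger:
                continue
            # every required argument must attach to the trigger with the right entity type
            if all(
                sentence["entities"].get(sentence["deps"].get((head, dep))) == etype
                for dep, etype in self.args.items()
            ):
                return True
        return False

# if predicate=born and nsubj is PERSON and nmod_in is CITY
# then relation=per:city_of_birth
rule = SyntacticRule(
    label="per:city_of_birth",
    trigger="born",
    args={"nsubj": "PERSON", "nmod_in": "CITY"},
)
sentence = {
    "tokens": ["John", "was", "born", "in", "London", "."],
    "entities": {0: "PERSON", 4: "CITY"},
    "deps": {(2, "nsubj"): 0, (2, "nmod_in"): 4},
}
print(rule.matches(sentence))  # True
```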
(2) Neural Model: Our approach utilizes the approach of Tang and Surdeanu (2022). It contains two main classifiers: a relation classifier (RC),
and an explanation classifier (EC). The RC is a multiclass classifier that distinguishes between actual relation labels seen in training. The EC is a binary word-level classifier, which labels which words in the sentence are important for the relation at hand. For example, for the sentence *"[CLS]*
John was born in London.", the RC predicts a per:city_of_birth relation between *John* and 3nsubj and nmod_in are syntactic dependencies that indicate nominal subject and indirect object attached to the verb through the preposition in, respectively.
London, and the EC identifies which words are critical for this relation (*born* and in). The EC and RC
are trained jointly: the RC relies only on the hidden states of the context words identified by the EC
(rather than, say, the [CLS] embedding); the EC is trained in a semi-supervised way, i.e., to maximize the probability of the correct RC label.
(3) Rule Generator: The rule generator has two major components: the generator and the filter. The generator takes the model output from the neural model above and produces rules by: (a) connecting the EC output to the trigger of the rule; (b) generating subject and object arguments that are connected to the trigger through the shortest syntactic dependency path, and (c) assigning the RC output (the label) to this syntactic pattern. The filter takes the rules produced by the generator, applies them to a validation set and evaluates their precision. If a rule's performance is below a certain threshold, the filter discards it.
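As a small illustration of the filter, the sketch below keeps only the rules whose precision on a validation set reaches a threshold (0.5 is the value used in Section 4.3); the `rule.apply` interface and the `(sentence, gold_label)` data format are assumptions rather than the paper's actual implementation.

```python
# Sketch of the precision-based rule filter. `rule.apply(sentence)` is assumed
# to return a relation label when the rule matches and None otherwise.
def filter_rules(rules, validation_data, threshold=0.5):
    kept = []
    for rule in rules:
        correct, predicted = 0, 0
        for sentence, gold_label in validation_data:
            label = rule.apply(sentence)
            if label is None:
                continue
            predicted += 1
            correct += int(label == gold_label)
        precision = correct / predicted if predicted else 0.0
        if precision >= threshold:         # discard low-precision rules
            kept.append(rule)
    return kept
```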
## 3.1 Training Procedure
In iteration 0, we feed the seed rules R0 to the rule executor which applies them on the unlabeled sentence set Draw. These rules are a small set of rules written by human annotators. We add the rulematched data as seed annotations to the labeled data set D*train* and remove them from Draw.
In iteration i, we train the neural model Mi with all labeled data in D*train* and use it to label the current Draw. Then, we generate and filter the rules that explain the sentences in Draw using the rule generator. Next, we feed the newly generated rules Ri+1 to the rule executor, apply them over Draw, and produce new labeled data, i.e., sentences with labeled relations. Lastly, we add the newly labeled data to D*train* and remove the corresponding sentences from Draw. We repeat this procedure until performance converges on a validation set.
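The training loop above could be sketched as follows; the callables passed in (`apply_rules`, `train_ec_rc`, `generate_rules`, `filter_rules`) are hypothetical stand-ins for the rule executor, the joint EC-RC model, and the rule generator/filter described in Section 3.

```python
# Sketch of the bootstrapping procedure (Algorithm 1). `train_data` is assumed
# to be a list of (sentence, relation label) pairs and sentences are assumed
# to be hashable; all helper functions are hypothetical placeholders.
def bootstrap(seed_rules, raw_sentences, dev_set,
              apply_rules, train_ec_rc, generate_rules, filter_rules,
              num_iterations=10):
    # iteration 0: seed annotations from the manually written rules
    train_data = apply_rules(seed_rules, raw_sentences)
    labeled = {sent for sent, _ in train_data}
    raw_sentences = [s for s in raw_sentences if s not in labeled]

    model = None
    for _ in range(num_iterations):
        model = train_ec_rc(train_data)                 # train on all labeled data
        labels, explanations = model.predict(raw_sentences)
        rules = filter_rules(generate_rules(labels, explanations), dev_set)
        new_data = apply_rules(rules, raw_sentences)    # new "silver" annotations
        train_data = train_data + new_data
        labeled = {sent for sent, _ in new_data}
        raw_sentences = [s for s in raw_sentences if s not in labeled]
    return model
```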
## 4 Experimental Results

## 4.1 Data Preparation
We report results on the TACRED relation extraction (RE) dataset (Zhang et al., 2017). To mimic low-resource scenarios, we hide all gold labels from the training set. We keep only 1% of the development set labeled for tuning purposes. We use as seeds (R0) the rule set from (Tang and Surdeanu, 2022), which is a combination of the surface patterns of Angeli et al. (2015), and syntactic rules written in the Odin language (Valenzuela-Escárcega et al., 2016), which were manually created by Tang and Surdeanu (2022). Overall, we use an average of 7 rules per relation type. Tang and Surdeanu (2022) indicated that these rules did not require considerable effort, i.e., they were developed by one of the authors within a few hours.
## 4.2 Baselines
We compare our results with four baselines:
an extended version of the rule-based approach of Angeli et al. (2015), a typical self-training approach, a prompt-based RE approach based on natural language inference (NLI) (Sainz et al., 2021),
and a prompt-based rule discovery and boosting approach (Zhang et al., 2022):
Rule-based extraction: This baseline uses only the two sets of rules in our seed set (R0): (a) the surface rules from (Angeli et al., 2015), which are executed in the Stanford CoreNLP pipeline (Manning et al., 2014); and (b) the syntactic rules of Tang and Surdeanu (2022), which are executed in the Odin framework.4

Self-training: This baseline is similar to our full method, with the exception that, in each iteration, we use the trained RC model to label new data rather than the generated rules.
NLI-prompt: (Sainz et al., 2021) reformulated the RE task as an entailment task driven by templates.
They manually generated a number of verbalization templates for each relation in TACRED, e.g.,
the per:city_of_birth relation is verbalized as
{subj} was born in {obj}, where {subj} and {obj}
will be replaced with the entities in the given sentence. Thus, the sentence containing the relation to classify becomes the premise and the verbalized template the hypothesis. The RE task is then reduced to finding the best entailment template for the given sentence. no_relation is generated if no entailment score over a certain threshold is observed.
PRBOOST iteratively generates rules from prompting, asks a human expert to filter the rules, uses the rules to generate new annotations, and, lastly, uses the annotations to train a new model
(Zhang et al., 2022).
## 4.3 Implementation And Evaluation Details
For our method we follow the same implementation details and hyper parameters as Tang and Surdeanu
(2022). The only difference is that instead of using the full development set, we randomly select 1% from the TACRED development set for tuning, i.e., to decide which generated rules to keep, and to decide when the bootstrapping training procedure completes. For the former, we used 0.5 as the threshold; that is, if the precision of a rule is lower than the threshold, we discard that rule.

4 The rule set from (Angeli et al., 2015) also included some syntactic rules, but we found out that they only matched the simpler per:title relation, so we did not use them.

| Approach | Precision | Recall | F1 |
|---------------|-----------|--------|-------|
| Baselines | | | |
| Rules | 85.82 | 24.21 | 37.77 |
| Self-training | 65.58 | 38.56 | 48.56 |
| NLI-prompt7 | 55.46 | 52.09 | 53.72 |
| PRBOOST | - | - | 48.1 |
| Our Approach | | | |
| Iteration 4 | 67.10 | 45.14 | 53.97 |

Table 1: Overall performance of our approach and the baseline methods on TACRED.
For a fair comparison, for the NLI-prompt approach of Sainz et al. (2021) we chose their zero-shot scenario5 and RoBERTa (Liu et al., 2019).6 Further, to guarantee the same level of supervision, we converted our seed rules to their verbalization templates (see Appendix A for the conversion procedure). Lastly, we estimate their threshold for no_relation using the same validation dataset as our approach. We iterated from 0.1 to 0.9 with a step of 0.1, and observed the best validation results for a threshold of 0.8.
## 4.4 Results And Discussion
Table 1 reports the overall performance of our approach and the four other methods. For PRBOOST
we used the numbers reported in the corresponding paper. We draw the following observations:
(1) As expected, the rule-based baseline has high precision but suffers from low recall. In contrast, our best model that is bootstrapped from the same rules has 20% higher recall and 15% higher F1
(absolute). This indicates that the bootstrapping approaches popularized for information extraction several decades ago remain valid in the neural era.
(2) Our approach performs statistically significantly better than the traditional self-training approach that uses the relation classifier for self labeling (53.97 vs. 48.56 F1)8. The fact that rules perform better for self labeling than the actual neural model is somewhat surprising. Our hypothesis is that the neural model suffers more from overfitting due to its large number of parameters and the relatively small amount of training data. Rules generalize better (and thus produce better "silver" labels) because the simple syntactic patterns generated provide reduced opportunities for overfitting.

![3_image_0.png](3_image_0.png)
To validate this hypothesis we plot the learning curves of the two approaches on our validation partition in Figure 1.9 These curves indicate that the best performance of our approach is in iteration 4, while the neural self-training continues to improve on validation until iteration 9. However, as shown in Figure 2, the performance of the model-based self-training on test saturates after iteration 4, which suggests that, indeed, the neural self-training method suffers from overfitting.
(3) Our method performs better than PRBOOST
and similarly to the NLI-prompt method. This suggests that self-training, when carefully implemented, remains competitive with more modern alternatives such as prompt-based methods. More importantly, our approach is simpler, as it does not need the extra inference layers, e.g., the NLI
classifier in the NLI-prompt approach.
## 4.5 Error Analysis
We conclude this section with a brief error analysis that compares our rule-based bootstrapping approach with the "traditional" neural-model-based self-training approach.
8 We performed statistical significance analysis using nonparametric bootstrap resampling with 1000 iterations.
9 Appendix B contains a more detailed curve for our approach including precision, recall, and F1.

Sean Parker , a 17-year-old student , was portraying a casualty clutching a head injury caused by a falling classroom fan .
Gold label: per:title; Rule label: per:title; NN Model label: no_relation

Gilchrist teamed with Chris Simcox , a newspaper publisher in Tucson , Ariz. , to form the controversial Minuteman project , which drew nearly 900 volunteers to Arizona in April .
Gold label: no_relation; Rule label: per:title; NN Model label: no_relation

Table 2: Example outputs from a per:title rule. The subject and object entities, which are provided in the task input, are highlighted in blue and orange. The important tokens for explainability identified by the various methods are highlighted in red.

![4_image_0.png](4_image_0.png)

First, we conducted a comparative analysis of the annotations produced by the two methods after the first iteration. In this setting, both approaches were trained on the same seed annotations, which ensures a fair comparison. Out of all positive examples in training data (excluding the seed examples),
our approach annotated 7.64% of them correctly, while self-training annotated only 3.70% correctly. Among these true positives produced by the neural bootstrapping, 75.68% of them are also annotated correctly by our approach. This indicates that our generated rules not only cover most of the neural model's annotations, but also correctly annotate more uncovered instances.
Table 2 shows a case where the neural-modelbased self-training method falls short (first row in the table) and a case where bootstrapping does not seem to help (second row). These two cases are extracted by the same rule, in which the trigger words ", a" are used to connect SUBJ_PERSON
and OBJ_TITLE entities through <punct or <punct appos syntactic dependencies. This rule matches 120 examples in the training set, 102 of which are true positive. Importantly, only 67 of the 120 examples are uncovered by the neural bootstrapping model, which highlights again the increased coverage of our rule-based method. Interestingly, while the label produced by the rule-based bootstrapping model for the second example in the table is technically wrong, in the opinion of the authors the gold label is incorrect here. This suggests that rules not only improve self-training, but have the potential to also improve the consistency of training data.
## 5 Conclusion
We introduced a method that self trains (or bootstraps) a neuro-symbolic approach for relation extraction that combines neural relation and explanation classifiers. Unlike traditional self-training, our approach uses rules for bootstrapping. In particular, our method iteratively converts the explainable models' outputs to rules and applies them to unlabeled text to produce new annotations. We evaluated our approach in a low-resource scenario where there is no labeled data, and the only supervision comes from a small number of seed patterns. Our experiments showed that using rules in the bootstrapped training procedure is better than the typical self-training method that relies on neural model predictions. Further, we show that we obtain similar performance with prompt-based models for relation extraction without the additional NLI component required by such approaches.
## Limitations
In this work we have tested our approach using SpanBERT, a relatively small model when compared to, say, DeBERTa_large or GPT. SpanBERT
has been reported to obtain state-of-the-art performance for relation extraction (Joshi et al., 2020; Tang and Surdeanu, 2022), but it is unclear if a larger LM would improve this semi-supervised learning setting.
We use both surface patterns (in the tokensregex
(Chang and Manning, 2014) format) and syntactic patterns (Odin (Valenzuela-Escárcega et al., 2016))
as training seeds, but our approach can only produce syntactic patterns as outputs. This is not ideal, since there is empirical evidence showing that the mixed representation for rules may provide better performance. For example, we can easily capture per_title relation with a surface rule such as
"{obj_title} {subj_person}", which simply looks for the two entities being adjacent.
## Ethics Statement
This work did not involve human annotations, other than the set of rules used as seeds (Angeli et al.,
2015; Tang and Surdeanu, 2022).
It is unlikely but possible that the automatically-generated rules we used during bootstrapping augment some unknown biases in the unlabeled data.
In a brief analysis of the data we did not observe any such situations. However, this potential undesired side effect is important and should not be ignored in the eventual deployment of this method in real-world applications.
## References
Steven Abney. 2002. Bootstrapping. In *Proceedings of* the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 360–367, USA.
Association for Computational Linguistics.
Gabor Angeli, Victor Zhong, Danqi Chen, A. Chaganty, J. Bolton, Melvin Jose Johnson Premkumar, Panupong Pasupat, S. Gupta, and Christopher D.
Manning. 2015. Bootstrapped self training for knowledge base population. In Text Analysis Conference (TAC).
Angel X Chang and Christopher D Manning. 2014. Tokensregex: Defining cascaded regular expressions over tokens. Stanford University Computer Science Technical Reports. CSTR, 2:2014.
Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In 1999 Joint SIGDAT conference on empirical methods in natural language processing and very large corpora.
Matan Eyal, Asaf Amrami, Hillel Taub-Tabib, and Yoav Goldberg. 2021. Bootstrapping relation extractors using syntactic search by examples. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1491–1503, Online. Association for Computational Linguistics.
Sonal Gupta and Christopher D. Manning. 2015. Distributed representations of words to guide bootstrapped entity classifiers. In *Proceedings of the 2015*
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1215–1220, Denver, Colorado. Association for Computational Linguistics.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky.
2014. The Stanford CoreNLP natural language processing toolkit. In *Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics.
Ellen Riloff. 1996. Automatically generating extraction patterns from untagged text. In *AAAI/IAAI, Vol. 2*.
Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label verbalization and entailment for effective zero and fewshot relation extraction. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 1199–1212, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Micah Shlain, Hillel Taub-Tabib, Shoval Sadde, and Yoav Goldberg. 2020. Syntactic search by example.
In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics: System Demonstrations, pages 17–23, Online. Association for Computational Linguistics.
Zheng Tang and Mihai Surdeanu. 2022. It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers. Computational Linguistics, pages 1–40.
Marco A. Valenzuela-Escárcega, Gus Hahn-Powell, and Mihai Surdeanu. 2016. Odin's runes: A rule language for information extraction. In *Proceedings of* the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 322–329, Portorož, Slovenia. European Language Resources Association (ELRA).
Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2300–2344, Seattle, United States.
Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196.
Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. 2022. Prompt-based rule discovery and boosting for interactive weakly-supervised learning.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 745–758, Dublin, Ireland.
Association for Computational Linguistics.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP
2017), pages 35–45.
## A From Rules To Nli Templates
To guarantee the same level of supervision between our approach and NLI-prompt, we converted our seed rules to their verbalization templates. To convert a rule to a verbalized template, we first apply the rule to the available texts, extract the shortest span that covers the trigger and the subject/object arguments, and extract this shortest text span as the verbalized template. The actual template is the span with entities replaced with placeholders *{subj}*
and *{obj}*. For example, for the sentence *"[CLS]*
John was born in London.", the shortest span is
"John was born in London", and the template will be *"{subj} was born in {obj}."*
## B Learning Curve
![6_Image_0.Png](6_Image_0.Png)
Figure 3 shows the changes in precision, recall, and F1 scores over multiple iterations on the validation set. As shown, the recall and F1 are steadily increasing during this procedure. This is inspiring since it shows that our approach can help improve the generalizability of the neural model in the lowresource scenario. Further, we note that the drop in precision is the reason the F1 score stops improving after iteration 4. However, this is solvable since our annotations are from the rules and there are ways to control the quality of the rules other than just filtering out the low precision ones. We leave this analysis as future work.
## C Experimental Details
We follow the same details from Tang and Surdeanu (2022)'s experiments. Table 3 shows the hyperparameter details for training.
| Number of iterations | 10 |
|------------------------|---------------------|
| Number of epochs | 20 |
| Learning rate | 1e-5 |
| Dropout rate | 0.1 |
| Batch size | 32 |
| Max sequence length | 128 |
| Scheduler | Linear with warm-up |
Table 3: Hyperparameter details for training.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
At the beginning of page 5.
✓ A2. Did you discuss any potential risks of your work?
Discussed in the limitations section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
nowak-cotterell-2023-fast | A Fast Algorithm for Computing Prefix Probabilities | https://aclanthology.org/2023.acl-short.6 | Multiple algorithms are known for efficiently calculating the prefix probability of a string under a probabilistic context-free grammar (PCFG). Good algorithms for the problem have a runtime cubic in the length of the input string. However, some proposed algorithms are suboptimal with respect to the size of the grammar. This paper proposes a new speed-up of Jelinek and Lafferty{'}s (1991) algorithm, which runs in $O(n^3|N|^3 + |N|^4)$, where n is the input length and |N| is the number of non-terminals in the grammar. In contrast, our speed-up runs in $O(n^2|N|^3 + n^3|N|^2)$. |
# A Fast Algorithm for Computing Prefix Probabilities

Franz Nowak  Ryan Cotterell
ETH Zürich
{fnowak, rcotterell}@ethz.ch

## Abstract
Multiple algorithms are known for efficiently calculating the prefix probability of a string under a probabilistic context-free grammar
(PCFG). Good algorithms for the problem have a runtime cubic in the length of the input string.
However, some proposed algorithms are suboptimal with respect to the size of the grammar.
This paper proposes a novel speed-up of Jelinek and Lafferty's (1991) algorithm, which runs in $O(N^3|\mathcal{N}|^3 + |\mathcal{N}|^4)$, where $N$ is the input length and $|\mathcal{N}|$ is the number of non-terminals in the grammar. In contrast, our speed-up runs in $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$.
https://github.com/rycolab/prefix-parsing
## 1 Introduction
Probabilistic context-free grammars (PCFGs) are an important formalism in NLP (Eisenstein, 2019, Chapter 10). One common use of PCFGs is to construct a language model. For instance, PCFGs form the backbone of many neural language models, e.g., recurrent neural network grammars (RNNGs; Dyer et al., 2016; Dyer, 2017; Kim et al., 2019). However, in order to use a PCFG as a language model, one needs to be able to compute prefix probabilities, i.e., the probability that the yield of a derivation starts with the given string. In notation, given a string w = w1 *· · ·* wN , we seek the probability p(S ∗⇒ w *· · ·*) where S is the distinguished start symbol of the grammar and ∗⇒ is the closure over applications of derivation rules of the grammar.1 Our paper gives a more efficient algorithm for the simultaneous computation of the prefix probabilities of all prefixes of a string w under a PCFG.
The authors are aware of two existing efficient algorithms to compute prefix probabilities under a PCFG.2 The first is Jelinek and Lafferty's (1991)
1 Specifically, $\alpha \stackrel{*}{\Rightarrow} \beta$ means that there exists an $n \geq 0$ such that $\alpha \Rightarrow \cdots \Rightarrow \beta$ ($n$ times), where $\Rightarrow$ marks a derivation step.
2Upon publication of this work, the authors were made aware of two other algorithms for finding prefix probabilities in the special case of idempotent semirings (Corazza et al.
1994; Sánchez and Benedí 1997). See App. B for a discussion of prefix parsing under a semiring.
algorithm which is derived from CKY (Kasami, 1965; Younger, 1967; Cocke and Schwartz, 1970)
and, thus, requires the grammar to be in Chomsky normal form (CNF). Jelinek–Lafferty runs in $O(N^3|\mathcal{N}|^3 + |\mathcal{N}|^4)$ time, where $N$ is the length of the input and $|\mathcal{N}|$ is the number of non-terminals of the grammar, slower than the $O(N^3|\mathcal{N}|^3)$ required for parsing with CKY, when the number of non-terminals $|\mathcal{N}|$ is taken into account.
The second, due to Stolcke (1995), is derived from Earley parsing (Earley, 1970) and can parse arbitrary PCFGs,3 with a runtime of $O(N^3|\mathcal{N}|^3)$.
Many previous authors have improved the runtime of Earley's (Graham et al., 1980; Leermakers et al., 1992; Moore, 2000, *inter alia*), and Opedal et al. (2023) successfully applied this speed-up to computing prefix probabilities, achieving a runtime of O(N3|G|), where |G| is the size of the grammar, that is, the sum of the number of symbols in all production rules.
Our paper provides a more efficient version of Jelinek and Lafferty (1991) for the computation of prefix probabilities under a PCFG in CNF. Specifically, we give an O(N2|N |3 + N3*|N |*2) time algorithm, which is the fastest attested in the literature for dense grammars in CNF,4 matching the complexity of CKY adapted for dense grammars by Eisner and Blatz (2007).5 We provide a full derivation and proof of correctness, as well as an open-source implementation on GitHub. We also briefly discuss how our improved algorithm can be extended to work for semiring-weighted CFGs.
## 2 Preliminaries
We start by introducing the necessary background on probabilistic context-free grammars.
3Note that Earley's and, by extension, Stolcke's algorithms also implicitly binarize the grammar during execution by using dotted rules as additional non-terminals.
4A PCFG in CNF is dense if for every X, Y, Z ∈ N , we have a production rule X → Y Z ∈ R.
5Note that there exist approximate parsing algorithms with lower complexity bounds (Cohen et al., 2013). Moreover, there are parsing algorithms that asymptotically run in subcubic time in the input length using fast matrix multiplication
(Valiant, 1975; Benedí and Sánchez, 2007). However, they are of limited practical use (Lee, 1997).
Definition 1. A **probabilistic context-free grammar** *(PCFG) is a five-tuple* G = (N , Σ, S, R, p),
made up of:
- *A finite set of non-terminal symbols* N ;
- *An alphabet of terminal symbols* Σ;
- A distinguished start symbol S ∈ N ;
- A finite set of production rules *R ⊂ N ×*
(N ∪ Σ)∗ *where each rule is written as* X −→
α with X ∈ N and α ∈ (N ∪ Σ)∗*. Here,* ∗
denotes the Kleene closure;
- A weighting function $p : R \to [0, 1]$ assigning each rule $r \in R$ a probability such that $p$ is **locally normalized**, meaning that for all $X \in \mathcal{N}$ that appear on the left-hand side of a rule, $\sum_{X \to \alpha \in R} p(X \to \alpha) = 1$.
Note that not every locally normalized PCFG constitutes a valid distribution over Σ∗. Specifically, some may place probability mass on infinite trees
(Chi and Geman, 1998). PCFGs that do constitute a valid distribution over Σ∗are referred to as **tight**.
Furthermore, if all non-terminals of the grammar can be reached from the start non-terminal via production rules, we say the PCFG is **trim**.
Definition 2. *A PCFG* G = (N , Σ, S, R, p) *is in* Chomsky normal form (CNF) if each production rule in R *is in one of the following forms:*
$$\begin{array}{l l}{{\mathrm{X}\to\mathrm{Y}\,\mathrm{Z}}}&{{\qquad\qquad\qquad(1)}}\\ {{\mathrm{X}\to a}}&{{\qquad\qquad\qquad(2)}}\\ {{\mathrm{S}\to\varepsilon}}&{{\qquad\qquad\qquad(3)}}\end{array}$$
where X, Y, Z ∈ N are non-terminals, a ∈ Σ are terminal symbols, and ε *is the empty string.*
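For concreteness, a minimal sketch of a locally normalized PCFG in CNF as a Python data structure; the toy grammar and the dictionary format are invented for illustration, with lowercase right-hand sides standing for terminal rules of the form X → a.

```python
# A toy locally normalized PCFG in CNF: each left-hand side maps to a list of
# (right-hand side, probability) pairs whose probabilities sum to one.
rules = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("DT", "NN"), 0.7), (("she",), 0.3)],
    "VP": [(("VB", "NP"), 0.6), (("sleeps",), 0.4)],
    "DT": [(("the",), 1.0)],
    "NN": [(("dog",), 1.0)],
    "VB": [(("sees",), 1.0)],
}

def is_locally_normalized(rules, tol=1e-9):
    # for every non-terminal on a left-hand side, rule probabilities sum to one
    return all(abs(sum(p for _, p in rhs) - 1.0) < tol for rhs in rules.values())

print(is_locally_normalized(rules))  # True
```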
Definition 3. A **derivation step** α ⇒ β *is an application of the binary relation* ⇒: (N ∪ Σ)∗ × (N ∪
Σ)∗*, which rewrites the left-most non-terminal in* α according to a rule in R *from the left-hand side* of that rule to its right-hand side, resulting in β.
Definition 4. A **derivation** *under a grammar* G is a sequence α0, α1, · · · , αm, where α0 ∈
N , α1, · · · , αm−1 ∈ (N ∪ Σ)∗*, and* αm ∈ Σ∗,
in which each αi+1 is formed by applying a derivation step to αi. αm = w1 · · · wN ∈ Σ∗*is called* the **yield** of the derivation. If α0 is not the start symbol S, we call it a **partial derivation***. We write* α0∗⇒ w1 · · · wN , where ∗⇒ is the closure over the binary relation ⇒ *introduced in definition* 3.
We represent derivations as trees whose structure corresponds to production rules, where any parent node is the non-terminal on the left-hand side of a rule and its children are the symbols from the right-hand side. The leaves of the tree, when read from left to right, form the yield. Such a tree, when rooted S, is called a **derivation tree**. Otherwise, it is called a **derivation subtree**.
Definition 5. The probability of a derivation tree
(or derivation subtree) τ *is the product of the probabilities of all its corresponding production rules:*
$$p(\mathbf{\tau})\ {\stackrel{\mathrm{def}}{=}}\quad\prod_{(\mathbf{\alpha}\to\mathbf{\beta})\in\mathbf{\tau}}\ p(\mathbf{\alpha}\to\mathbf{\beta})\qquad\qquad(4)$$
Definition 6. We define $\mathcal{T}_{\mathrm{X}}(w_i \cdots w_k)$ as the set of all derivation subtrees $\tau$ rooted at X with yield $w_i \cdots w_k$.

Definition 7. *Given a PCFG* G = (N , Σ, S, R, p),
a non-terminal X ∈ N *, and a string* w =
w1 · · · wN ∈ Σ∗, the **inside probability** of X between indices i and k (where 1 ≤ i ≤ k ≤ N*) is* defined as:
$$\beta(i,k\mid\mathrm{X})\ {\stackrel{\mathrm{def}}{=}}\ p(\mathrm{X}\ {\stackrel{*}{\Rightarrow}}\ w_{i}\cdots w_{k})\qquad\qquad(5)$$ $$=\sum_{\tau\in{\cal T}_{\mathrm{X}}(w_{i}\cdots w_{k})}p(\tau)\qquad\qquad(6)$$
That is, the sum of the probability of all derivation trees τ starting at X that have yield wi*· · ·* wk. Definition 8. *Given a PCFG* G = (N , Σ, S, R, p),
a non-terminal X ∈ N *, and a string* w = w1 · · · wN ∈ Σ∗, we define the *prefix probability* pπ, i.e., the probability of w *being a prefix under* G*, to be:*
$$p_{\pi}(\mathbf{w}\mid\mathrm{X})\ \stackrel{\mathrm{def}}{=}\ \sum_{\mathbf{u}\in\Sigma^{*}}p(\mathrm{X}\ \stackrel{*}{\Rightarrow}\ \mathbf{w}\mathbf{u})\tag{7}$$
In words, pπ is the probability of deriving w with an arbitrary continuation from X, that is, the sum of probabilities of deriving wu from X over all possible suffixes u ∈ Σ∗. In the following, we write the prefix probability of deriving prefix w = wi*· · ·* wk from X as pπ(*i, k* | X).
Definition 9. Let G be a PCFG in CNF. Then for non-terminals X, Y, Z ∈ N , the **left-corner** expectations Elc(Y | X) and Elc(Y Z | X) *are, respectively, defined as:*
$$E_{\mathrm{lc}}(\mathrm{Y}\mid\mathrm{X})\ \stackrel{\mathrm{def}}{=}\ \sum_{\alpha\in\mathcal{N}^{*}}p(\mathrm{X}\stackrel{*}{\Rightarrow}\mathrm{Y}\,\alpha)\tag{8}$$
$$E_{\mathrm{lc}}(\mathrm{Y}\,\mathrm{Z}\mid\mathrm{X})\ \stackrel{\mathrm{def}}{=}\ \sum_{\mathrm{X}^{\prime}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{X}^{\prime}\mid\mathrm{X})\cdot p(\mathrm{X}^{\prime}\to\mathrm{Y}\,\mathrm{Z})\tag{9}$$
Algorithm 1 CKY
1: def CKY(w = w1 · · · wN , G):
2:   ▷ Initialize inside probabilities
3:   β(·, · | ·) ←− 0
4:   for k ∈ 1, . . . , N :
5:     for X −→ wk ∈ R :
6:       ▷ Handle single word tokens
7:       β(k, k | X) ←− β(k, k | X) + p(X −→ wk)
8:   ▷ ℓ is the span size
9:   for ℓ ∈ 2, . . . , N :
10:    ▷ i marks the beginning of the span
11:    for i ∈ 1, . . . , N − ℓ + 1 :
12:      ▷ k marks the end of the span
13:      k ←− i + ℓ − 1
14:      ▷ Recursively compute β
15:      for X −→ Y Z ∈ R :
16:        β(i, k | X) ←− β(i, k | X) + p(X −→ Y Z) · Σ_{j=i}^{k−1} β(i, j | Y) · β(j+1, k | Z)
17:  return β
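A direct Python transcription of Algorithm 1 might look as follows (0-based indexing); the grammar representation, a `lexical` dictionary for rules X → a and a list of `binary` rules X → Y Z, is an assumption made for the sketch, not the paper's released code.

```python
# CKY inside probabilities beta(i, k | X) as in Algorithm 1 (0-based indices).
from collections import defaultdict

def cky_inside(words, lexical, binary):
    n = len(words)
    beta = defaultdict(float)                       # (i, k, X) -> inside probability
    for k in range(n):                              # handle single word tokens
        for X, emissions in lexical.items():
            beta[k, k, X] += emissions.get(words[k], 0.0)
    for span in range(2, n + 1):                    # span size
        for i in range(n - span + 1):               # start of span
            k = i + span - 1                        # end of span
            for X, Y, Z, p in binary:               # recursively compute beta
                beta[i, k, X] += p * sum(
                    beta[i, j, Y] * beta[j + 1, k, Z] for j in range(i, k)
                )
    return beta

lexical = {"NP": {"she": 0.3}, "VP": {"sleeps": 0.4}}
binary = [("S", "NP", "VP", 1.0)]                   # (X, Y, Z, probability)
print(cky_inside(["she", "sleeps"], lexical, binary)[0, 1, "S"])  # 0.12
```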
Algorithm 2 Jelinek–Lafferty
1: def JL(w = w1 · · · wN , G):
2:   pπ(·, · | ·) ←− 0  ▷ Initialize prefix probabilities
3:   β ←− CKY(w)  ▷ Precompute β with Algorithm 1
4:   for Xi, Xj ∈ N :  ▷ Precompute Elc(Y | X)
5:     Elc(Xj | Xi) ←− [(I − P)^{−1}]_{ij}
6:   for X′ −→ Y Z ∈ R :  ▷ Precompute Elc(Y Z | X)
7:     Elc(Y Z | X) ←− Σ_{X∈N} Elc(X′ | X) · p(X′ −→ Y Z)
8:   for k ∈ 1, . . . , N :
9:     for X ∈ N :  ▷ Compute base case
10:      pπ(k, k | X) ←− Σ_{Y∈N} Elc(Y | X) · p(Y −→ wk)
11:  for ℓ ∈ 2 . . . N :
12:    for i ∈ 1 . . . N − ℓ + 1 :
13:      k ←− i + ℓ − 1
14:      for X, Y, Z ∈ N :  ▷ Recursively compute pπ
15:        pπ(i, k | X) ←− pπ(i, k | X) + Elc(Y Z | X) · Σ_{j=i}^{k−1} β(i, j | Y) · pπ(j+1, k | Z)
16:  return pπ
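Analogously, the core recursion of Algorithm 2 could be sketched as below, assuming the inside chart β, the left-corner expectations E_lc(Y | X), and the pair expectations E_lc(Y Z | X) have already been computed; the dictionary-based chart format is an assumption carried over from the CKY sketch.

```python
# Jelinek-Lafferty prefix probabilities p_pi(i, k | X), given precomputed
# charts: beta[(i, j, Y)] (a defaultdict), E_lc[(X, Y)], and
# E_lc2[(X, Y, Z)] = E_lc(Y Z | X).
from collections import defaultdict

def jl_prefix(words, nonterminals, lexical, E_lc, E_lc2, beta):
    n = len(words)
    p_pi = defaultdict(float)
    for k in range(n):                              # base case, Eq. (14)
        for X in nonterminals:
            p_pi[k, k, X] = sum(
                E_lc.get((X, Y), 0.0) * lexical.get(Y, {}).get(words[k], 0.0)
                for Y in nonterminals
            )
    for span in range(2, n + 1):                    # recursion, Eq. (10)
        for i in range(n - span + 1):
            k = i + span - 1
            for X in nonterminals:
                for Y in nonterminals:
                    for Z in nonterminals:
                        p_pi[i, k, X] += E_lc2.get((X, Y, Z), 0.0) * sum(
                            beta[i, j, Y] * p_pi[j + 1, k, Z] for j in range(i, k)
                        )
    return p_pi
```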
![2_image_0.png](2_image_0.png)
Figure 1: Pseudocode for the CKY algorithm (left) and Jelinek–Lafferty (right)
![2_image_1.png](2_image_1.png)
The left-corner expectation Elc(Y | X) is hence the sum of the probabilities of partial derivation subtrees rooted in X that have Y as the left-most leaf; see Fig. 2a for a visualization. Similarly, Elc(Y Z | X) is the sum of the probabilities of partial derivation subtrees that have Y and Z as the leftmost leaves; see Fig. 2b.
## 3 Jelinek And Lafferty **(1991)**
We now give a derivation of the Jelinek–Lafferty algorithm. The first step is to derive an expression for the prefix probability in PCFG terms.
Lemma 1. Given a tight, trim PCFG in CNF and a string w = w1 · · · wN *, the prefix probability of* a substring wi· · · wk of w*, can be defined recursively as follows:*
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{Y,Z\in\mathcal{N}}}E_{\mathrm{lc}}(\mathrm{Y\,Z\mid X})\tag{10}$$ $$\cdot\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j+1,k\mid\mathrm{Z})$$
Proof. A proof of Lemma 1 is given in App. A.
The above formulation of the prefix probability is closely related to that of the inside probability from Baker's (1979) inside–outside algorithm, which can be efficiently computed using CKY, see Algorithm 1.
Next, the left-corner expectations Elc as defined by Eq. (8) can be computed efficiently as follows. Let P denote the square matrix of dimension *|N |*, with rows and columns indexed by the non-terminals N (in some fixed order), where the entry at the i-th row and the j-th column corresponds to p(Xi −→ Xj •), i.e., the probability of deriving Xj on the left corner from Xi in one step:
$$p(\mathrm{X}_{i}\to\mathrm{X}_{j}\ \bullet)\ \stackrel{\mathrm{def}}{=}\ \sum_{\mathrm{Y}\in\mathcal{N}}p(\mathrm{X}_{i}\to\mathrm{X}_{j}\ \mathrm{Y})\tag{11}$$
We can find the probability of getting to nonterminal Xj after k derivation steps starting from Xi by multiplying P with itself k times:
$$p(\mathrm{X}_{i}\stackrel{k}{\rightarrow}\mathrm{X}_{j}\ \bullet)=(P^{k})_{i,j}\tag{12}$$
We can hence get the matrix P∗, whose entries correspond to deriving Xj from Xi after any number of derivation steps, by summing over all the powers of the matrix P:6
$$P^{*}\stackrel{{\rm def}}{{=}}I+P+P^{2}+P^{3}+\cdots=\sum_{n=0}^{\infty}P^{n}\tag{13}$$ $$=I+P\sum_{n=0}^{\infty}P^{n}=I+PP^{*}=(I-P)^{-1}$$
Note that the entry at the i-th row and j-th column of P∗ is exactly the left-corner expectation Elc(Xj | Xi). Finally, we can compute the left-corner expectations Elc(Y Z | X) using Eq. (9):
$$E_{\mathrm{lc}}(\mathrm{Y\,Z\mid X})\ {\stackrel{\mathrm{def}}{=}}\ \sum_{X^{\prime}\in{\mathcal{N}}}E_{\mathrm{lc}}(X^{\prime}\mid\mathrm{X})\cdot p(\mathrm{X^{\prime}\!\to\!Y\,Z})$$
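As a concrete illustration of Eqs. (9), (11), and (13), the left-corner expectations can be obtained with a single matrix inversion; the tiny rule set below is invented for the example.

```python
# Left-corner expectations: build the one-step left-corner matrix P (Eq. 11)
# and compute E_lc = P* = (I - P)^{-1} (Eq. 13), then the pair expectations
# E_lc(Y Z | X) via Eq. (9). The two binary rules are an invented example.
import numpy as np

nonterminals = ["S", "NP", "VP"]
idx = {X: i for i, X in enumerate(nonterminals)}
binary = [("S", "NP", "VP", 1.0), ("NP", "NP", "NP", 0.2)]   # (X, Y, Z, prob)

P = np.zeros((len(nonterminals), len(nonterminals)))
for X, Y, Z, p in binary:
    P[idx[X], idx[Y]] += p      # probability of Y as the left corner of X in one step

E_lc = np.linalg.inv(np.eye(len(nonterminals)) - P)
print(E_lc[idx["S"], idx["NP"]])   # E_lc(NP | S) = 1 + 0.2 + 0.2**2 + ... = 1.25

E_lc2 = {}                         # Eq. (9): E_lc(Y Z | X)
for X in nonterminals:
    for Xp, Y, Z, p in binary:
        key = (X, Y, Z)
        E_lc2[key] = E_lc2.get(key, 0.0) + E_lc[idx[X], idx[Xp]] * p
```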
Similarly, we can compute the base case of the recursive Eq. (10), namely pπ(*k, k* | X), which is defined as follows.
Definition 10. *The prefix probability of the token* at position k being derived from X *is defined as:*
$$p_{\pi}(k,k\mid\mathrm{X})\stackrel{{\mathrm{def}}}{{=}}\sum_{\mathrm{Y\in\mathcal{N}}}E_{\mathrm{lc}}(\mathrm{Y}\mid\mathrm{X})\cdot p(\mathrm{Y\to}w_{k})\tag{14}$$
We can now combine the quantities derived above to obtain an efficient algorithm for the computation of prefix probabilities pπ(*i, k* | S). For the full algorithm, see Algorithm 2.
Proposition 1. The time complexity of the CKY algorithm as presented in Algorithm 1 is $O(N^3|\mathcal{N}|^3)$.

Proof. Clearly, the computationally critical part is in lines 9–13, where we iterate over all indices of w for i, j, and k, as well as over the whole set of grammar rules, thus taking $O(N^3|R|)$. In a PCFG in CNF, with the size of the alphabet taken as constant, the number of rules, $|R|$, is $O(|\mathcal{N}|^3)$, making the overall complexity of CKY $O(N^3|\mathcal{N}|^3)$. ■
6Note that this sum converges if the PCFG is tight and trim since infinite derivation (sub)trees have zero probability mass.
Proposition 2. The total time complexity of Jelinek–Lafferty is $O(N^3|\mathcal{N}|^3 + |\mathcal{N}|^4)$.

Proof. 1. We begin by pre-computing all the inside probabilities β in line 2 of Algorithm 2, which takes $O(N^3|\mathcal{N}|^3)$ by Proposition 1.

2. Next, in lines 3–4, we pre-compute all the left-corner expectations Elc(Y | X) using Eq. (13), which has the complexity of inverting the matrix P, i.e., $O(|\mathcal{N}|^3)$.

3. In lines 5–7, we then use Eq. (9) to compute Elc(Y Z | X), iterating once over all non-terminals X for each rule, which takes $O(|R||\mathcal{N}|)$, that is, $O(|\mathcal{N}|^4)$.

4. Computing pπ(k, k | X) for all X ∈ N by Eq. (14) in lines 8–10 takes $O(N|\mathcal{N}|^2)$ as we iterate over all positions k ≤ N and over all Y ∈ N for each X ∈ N.

5. And finally, computing the pπ chart in lines 11–14 takes $O(N^3|\mathcal{N}|^3)$ since we iterate over all ℓ, i, j ≤ N and X, Y, Z ∈ N.

6. This yields an overall time complexity of $O(N^3|\mathcal{N}|^3 + |\mathcal{N}|^4)$.

■
## 4 Our Speed-Up
We now turn to our development of a faster dynamic program to compute all prefix probabilities.
The speed-up comes from a different way to factorize pπ(*i, k* | X), which allows additional memoization. Starting with the definition of the prefix probability in Eq. (15a), we first expand Elc(Y Z | X)
by Eq. (9), as seen in Eq. (15b). Then, we factor out all terms that depend on the left-corner nonterminal Y in Eq. (15c), which we store in a chart γ, see Eq. (15e). We then do the same for all terms depending on X′, factoring them out in Eq. (15d)
and storing them in another chart δ, see Eq. (15f).
Our improved algorithm for computing all prefix probabilities is shown in Algorithm 3.
Proposition 3. The complexity of our improved algorithm is $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$.
Proof. 1. As before, computing Elc(Y | X) and pπ(k, k | X) takes $O(|\mathcal{N}|^3)$ and $O(N|\mathcal{N}|^2)$, respectively.
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{Y,Z}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{Y\,Z}\mid\mathrm{X})\cdot\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})\tag{15a}$$
$$=\sum_{\mathrm{Y,Z}\in\mathcal{N}}\sum_{\mathrm{X}^{\prime}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{X}^{\prime}\mid\mathrm{X})\cdot p(\mathrm{X}^{\prime}\to\mathrm{Y\,Z})\cdot\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})\tag{15b}$$
$$=\sum_{\mathrm{X}^{\prime},\mathrm{Z}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{X}^{\prime}\mid\mathrm{X})\cdot\sum_{j=i}^{k-1}\gamma_{ij}(\mathrm{X}^{\prime},\mathrm{Z})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})\tag{15c}$$
$$=\sum_{\mathrm{Z}\in\mathcal{N}}\sum_{j=i}^{k-1}\delta_{ij}(\mathrm{X},\mathrm{Z})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})\tag{15d}$$
where
$$\gamma_{ij}(\mathrm{X}^{\prime},\mathrm{Z})\ \stackrel{\mathrm{def}}{=}\ \sum_{\mathrm{Y}\in\mathcal{N}}p(\mathrm{X}^{\prime}\to\mathrm{Y\,Z})\cdot\beta(i,j\mid\mathrm{Y})\tag{15e}$$
and
$$\delta_{ij}(\mathrm{X},\mathrm{Z})\ \stackrel{\mathrm{def}}{=}\ \sum_{\mathrm{X}^{\prime}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{X}^{\prime}\mid\mathrm{X})\cdot\gamma_{ij}(\mathrm{X}^{\prime},\mathrm{Z})\tag{15f}$$
Algorithm 3 Faster prefix probability algorithm
1: def FastJL(w = w1 · · · wN , G):
2:   pπ(·, · | ·) ←− 0  ▷ Initialize prefix probabilities
3:   β ←− CKY(w)  ▷ Precompute β with Algorithm 1
4:   for Xi, Xj ∈ N :  ▷ Precompute Elc(Y | X)
5:     Elc(Xj | Xi) ←− [(I − P)^{−1}]_{ij}
6:   for i, j = 1, . . . , N :
7:     for X, Z ∈ N :  ▷ Precompute γ by Eq. (15e)
8:       γij(X, Z) ←− Σ_{Y∈N} p(X −→ Y Z) · β(i, j | Y)
9:     for X, Z ∈ N :  ▷ Precompute δ by Eq. (15f)
10:      δij(X, Z) ←− Σ_{Y∈N} Elc(Y | X) · γij(Y, Z)
11:  for k ∈ 1, . . . , N, for X ∈ N :  ▷ Base case
12:    pπ(k, k | X) ←− Σ_{Y∈N} Elc(Y | X) · p(Y −→ wk)
13:  for ℓ ∈ 2 . . . N :
14:    for i ∈ 1 . . . N − ℓ + 1 :
15:      k ←− i + ℓ − 1
16:      for X, Z ∈ N :  ▷ Recursively compute pπ
17:        pπ(i, k | X) ←− pπ(i, k | X) + Σ_{j=i}^{k−1} δij(X, Z) · pπ(j+1, k | Z)
18:  return pπ
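The following sketch mirrors Algorithm 3: the γ and δ charts are precomputed so that the span recursion only loops over pairs of non-terminals; the input format (tuple-keyed dictionaries, binary rules as (X, Y, Z, p)) is an assumption shared with the earlier sketches.

```python
# Faster prefix probabilities with the gamma/delta memoization of Eq. (15).
# Precomputing delta costs O(n^2 |N|^3); the span recursion then costs
# O(n^3 |N|^2) because it only iterates over X and Z.
from collections import defaultdict

def fast_jl_prefix(words, nonterminals, lexical, E_lc, binary, beta):
    n = len(words)
    gamma = defaultdict(float)                      # Eq. (15e)
    delta = defaultdict(float)                      # Eq. (15f)
    for i in range(n):
        for j in range(i, n):
            for Xp, Y, Z, p in binary:
                gamma[i, j, Xp, Z] += p * beta[i, j, Y]
            for X in nonterminals:
                for Xp in nonterminals:
                    for Z in nonterminals:
                        delta[i, j, X, Z] += E_lc.get((X, Xp), 0.0) * gamma[i, j, Xp, Z]

    p_pi = defaultdict(float)
    for k in range(n):                              # base case, Eq. (14)
        for X in nonterminals:
            p_pi[k, k, X] = sum(
                E_lc.get((X, Y), 0.0) * lexical.get(Y, {}).get(words[k], 0.0)
                for Y in nonterminals
            )
    for span in range(2, n + 1):                    # recursion, Eq. (15d)
        for i in range(n - span + 1):
            k = i + span - 1
            for X in nonterminals:
                for Z in nonterminals:
                    p_pi[i, k, X] += sum(
                        delta[i, j, X, Z] * p_pi[j + 1, k, Z] for j in range(i, k)
                    )
    return p_pi
```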
2. As Eisner and Blatz (2007) show, one can compute β in $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$, thus improving the runtime of Algorithm 1 for dense grammars.

3. Pre-computing γ and δ in lines 5–9 takes $O(N^2|\mathcal{N}|^3)$, as we sum over non-terminals, and both charts each have two dimensions indexing $N$ and two indexing $\mathcal{N}$.

4. The loops computing pπ in lines 13–17 take $O(N^3|\mathcal{N}|^2)$, as we are now iterating over X, Z ∈ N and ℓ, i, j ≤ N.

5. Hence, our new overall time complexity is $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$.

■
## 5 Generalization To Semirings
It turns out that Jelinek–Lafferty, and, by extension, our improved algorithm, can be generalized to work for semiring-weighted CFGs, with the same time complexity, under the condition that the weights are locally normalized and the semiring has a welldefined Kleene closure. This follows from the fact that the only operations used by the algorithm are addition and multiplication if we use Lehmann's
(1977) algorithm for the computation of left-corner expectations, Elc. The definitions, derivation, and proof of this statement can be found in App. B.
## 6 Conclusion
In this paper, we have shown how to efficiently compute prefix probabilities for PCFGs in CNF,
adapting Jelinek–Lafferty to use additional memoization, thereby reducing the time complexity from $O(N^3|\mathcal{N}|^3 + |\mathcal{N}|^4)$ to $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$. We thereby addressed one of the main limitations of the original formulation, namely that it is slow for large grammars.
## 7 Limitations
While we have improved the asymptotic running time of a classic algorithm with regard to grammar size, the time complexity of our algorithm is still cubic in the length of the input. Our result follows the tradition of dynamic programming algorithms that trade time for space by memoizing and reusing pre-computed intermediate results. The usefulness of this trade-off in practice depends on the specifics of the grammar, and while the complexity is strictly better in terms of non-terminals, it will be most noticeable for denser grammars with many nonterminals.
## 8 Ethics Statement
We do not foresee any ethical issues arising from this work.
## 9 Acknowledgements
We would like to thank the anonymous reviewers for their helpful comments. We would also like to thank Abra Ganz, Anej Svete, and Tim Vieira for helpful feedback on a draft of this paper.
## References
J. K. Baker. 1979. Trainable grammars for speech recognition. In Speech Communication Papers presented at the 97th Meeting of the Acoustical Society of America, pages 547–550, MIT, Cambridge, Massachusetts.
José-Miguel Benedí and Joan-Andreu Sánchez. 2007.
Fast stochastic context-free parsing: A stochastic version of the valiant algorithm. In *Pattern Recognition* and Image Analysis, pages 80–88, Berlin, Heidelberg. Springer Berlin Heidelberg.
Zhiyi Chi and Stuart Geman. 1998. Estimation of probabilistic context-free grammars. *Computational Linguistics*, 24(2):299–305.
John Cocke and J.T. Schwartz. 1970. Programming languages and their compilers: Preliminary notes.
Courant Institute of Mathematical Sciences, New York University.
Shay B. Cohen, Giorgio Satta, and Michael Collins.
2013. Approximate PCFG parsing using tensor decomposition. In *Proceedings of the 2013 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 487–496, Atlanta, Georgia. Association for Computational Linguistics.
A. Corazza, R. De Mori, R. Gretter, and G. Satta.
1994. Optimal probabilistic evaluation functions for
search controlled by stochastic context-free grammars. *IEEE Transactions on Pattern Analysis and* Machine Intelligence, 16(10):1018–1027.
Chris Dyer. 2017. Should neural network architecture reflect linguistic structure? In *Proceedings of* the 21st Conference on Computational Natural Language Learning (CoNLL 2017), page 1, Vancouver, Canada. Association for Computational Linguistics.
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California.
Association for Computational Linguistics.
Jay Earley. 1970. An efficient context-free parsing algorithm. *Communications of the ACM*, 13(2):94–102.
Jacob Eisenstein. 2019. *Introduction to Natural Language Processing*. Adaptive Computation and Machine Learning series. MIT Press.
Jason Eisner and John Blatz. 2007. Program transformations for optimization of parsing algorithms and other weighted logic programs. In *Proceedings of* FG 2006: The 11th Conference on Formal Grammar, pages 45–85. CSLI Publications.
Robert W. Floyd. 1962. Algorithm 97: Shortest path.
Communications of the ACM, 5(6):345.
Susan L. Graham, Michael Harrison, and Walter L.
Ruzzo. 1980. An improved context-free recognizer.
ACM Transactions on Programming Languages and Systems, 2(3):415–462.
Frederick Jelinek and John D. Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. *Computational* Linguistics, 17(3):315–353.
Tadao Kasami. 1965. An efficient recognition and syntax-analysis algorithm for context-free languages. In Technical Report, Air Force Cambridge Research Lab, Bedford, MA.
Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 1105–1117, Minneapolis, Minnesota. Association for Computational Linguistics.
Lillian Lee. 1997. Fast context-free parsing requires fast Boolean matrix multiplication. In 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 9–
15, Madrid, Spain. Association for Computational Linguistics.
M. C. J. Leermakers, A. Augusteijn, and F.E.J. Kruseman Aretz. 1992. A functional LR parser. *Theoretical Computer Science*, 104(2):313–323.
Daniel J. Lehmann. 1977. Algebraic structures for transitive closure. *Theoretical Computer Science*,
4(1):59–76.
Robert C. Moore. 2000. Time as a measure of parsing efficiency. In *Proceedings of the COLING-2000 Workshop on Efficiency In Large-Scale Parsing Systems*,
pages 23–28, Centre Universitaire, Luxembourg. International Committee on Computational Linguistics.
Andreas Opedal, Ran Zmigrod, Tim Vieira, Ryan Cotterell, and Jason Eisner. 2023. Efficient semiringweighted Earley parsing. In *Proceedings of the 61st* Annual Meeting of the Association for Computational Linguistics (ACL), Toronto, Canada.
Bernard Roy. 1959. Transitivité et connexité. Comptes rendus hebdomadaires des séances de l'Académie des sciences, 249:216–218.
Grzegorz Rozenberg and Arto Salomaa, editors. 1997.
Handbook of Formal Languages, Vol. 1: Word, Language, Grammar. Springer-Verlag, Berlin, Heidelberg.
Joan-Andreu Sánchez and José-Miguel Benedí. 1997.
Computation of the probability of the best derivation of an initial substring from a stochastic context-free grammar. Proceedings of the VII Spanish Symposium on Pattern Recognition and Image Analysis, pages 181–186.
Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. *Computational Linguistics*, 21(2):165– 201.
Leslie G. Valiant. 1975. General context-free recognition in less than cubic time. *Journal of Computer* and System Sciences, 10(2):308–315.
Stephen Warshall. 1962. A theorem on boolean matrices. *Journal of the ACM*, 9(1):11–12.
Daniel H. Younger. 1967. Recognition and parsing of context-free languages in time n 3. Information and Control, 10(2):189–208.
## A Proof Of Lemma 1
Lemma 1. Given a tight, trim PCFG in CNF and a string w = w1 · · · wN *, the prefix probability of a* substring wi· · · wk of w*, can be defined recursively as follows:*
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{Y,Z}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{Y\,Z}\mid\mathrm{X})\cdot\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})\tag{10}$$
Proof. Given the PCFG is in CNF and *k > i*, in order to derive the prefix wi*· · ·* wk we must first apply some rule X −→ Y Z, where the first part of the substring is then derived from Y and the remainder (and potentially more) from Z:
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{Y},\mathrm{Z}\in\mathcal{N}}p(\mathrm{X}\to\mathrm{Y}\,\mathrm{Z})\,\cdot\left[\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})+p_{\pi}(i,k\mid\mathrm{Y})\right]\tag{16}$$
where the last term, pπ(*i, k* | Y), handles the case where the whole prefix is derived from Y alone.
This term is clearly recursively defined through Eq. (16), with X replaced by Y. Defining $R(\mathrm{Y},\mathrm{Z})\ \stackrel{\mathrm{def}}{=}\ \sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j{+}1,k\mid\mathrm{Z})$, we can rewrite Eq. (16) as:
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{Y,Z}\in\mathcal{N}}p(\mathrm{X}\to\mathrm{Y}\,\mathrm{Z})\cdot R(\mathrm{Y},\mathrm{Z})+\sum_{\mathrm{A,B}\in\mathcal{N}}p(\mathrm{X}\to\mathrm{A}\,\mathrm{B})\cdot p_{\pi}(i,k\mid\mathrm{A})\tag{17}$$
After repeated substitutions ad infinitum, we get:
$$p_{\pi}(i,k\mid\mathrm{X})=\sum_{\mathrm{A,B}\in\mathcal{N}}p(\mathrm{X}\ \stackrel{*}{\Rightarrow}\mathrm{A}\,\mathrm{B})\cdot\sum_{\mathrm{Y,Z}\in\mathcal{N}}p(\mathrm{A}\to\mathrm{Y}\,\mathrm{Z})\cdot R(\mathrm{Y},\mathrm{Z})\tag{18}$$
Note that, in the last step, infinite derivations do not carry any probability mass since we assumed the PCFG to be tight and trim. Hence, the final form of the equation is:
$$\begin{aligned}p_{\pi}(i,k\mid\mathrm{X})&=\sum_{\mathrm{A},\mathrm{B}\in\mathcal{N}}p(\mathrm{X}\stackrel{\pi}{\Rightarrow}\mathrm{A}\,\mathrm{B})\sum_{\mathrm{Y},\mathrm{Z}\in\mathcal{N}}p(\mathrm{A}\to\mathrm{Y}\,\mathrm{Z})\cdot R(\mathrm{Y},\mathrm{Z})\\&=\sum_{\mathrm{Y},\mathrm{Z}\in\mathcal{N}}E_{\mathrm{lc}}(\mathrm{Y}\,\mathrm{Z}\mid\mathrm{X})\cdot\sum_{j=i}^{k-1}\beta(i,j\mid\mathrm{Y})\cdot p_{\pi}(j+1,k\mid\mathrm{Z})\end{aligned}$$
■
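To make the recursion concrete, the following Python sketch (illustrative only, not taken from the paper) evaluates Eq. (10) by memoised recursion. The table names `beta`, `E_lc`, and `base` are assumptions standing in for precomputed inside probabilities, left-corner expectations, and the single-token base case, which the lemma itself does not cover.

```python
def prefix_prob(i, k, X, beta, E_lc, base, nonterminals, memo=None):
    """p_pi(i, k | X) following the recursion of Eq. (10).

    beta[(i, j, Y)] : inside probability of Y spanning w_i .. w_j
    E_lc[(X, Y, Z)] : left-corner expectation E_lc(Y Z | X)
    base[(k, X)]    : single-token case p_pi(k, k | X), supplied by the caller
    """
    if memo is None:
        memo = {}
    if i == k:
        return base[(k, X)]
    key = (i, k, X)
    if key not in memo:
        total = 0.0
        for Y in nonterminals:
            for Z in nonterminals:
                e = E_lc.get((X, Y, Z), 0.0)
                if e == 0.0:
                    continue
                # inner sum over split points j = i, ..., k-1
                inner = sum(
                    beta.get((i, j, Y), 0.0)
                    * prefix_prob(j + 1, k, Z, beta, E_lc, base,
                                  nonterminals, memo)
                    for j in range(i, k)
                )
                total += e * inner
        memo[key] = total
    return memo[key]
```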
## B Extension Of Algorithm 3 To Semirings
In the following, we give the necessary background on semirings and then show how the algorithms introduced above can be framed in terms of semirings. We start by introducing the necessary definitions and notation.
Definition 11. A **monoid** is a 3-tuple ⟨A, ◦, 1⟩ *where:*
(i) A *is a non-empty set;*
(ii) ◦ is a binary operation which is associative: ∀a, b, c ∈ A,(a ◦ b) ◦ c = a ◦ (b ◦ c);
(iii) 1 is a left and right identity element: ∀a ∈ A, 1 ◦ a = a ◦ 1 = a
(iv) A is closed under the operation ◦: ∀a, b ∈ A, a ◦ b ∈ A
A monoid is **commutative** if ∀a, b ∈ A : a ◦ b = b ◦ a.
Definition 12. A **semiring** is a 5-tuple W = ⟨A, ⊕, ⊗, 0, 1⟩*, where*
(i) ⟨A, ⊕, 0⟩ is a **commutative monoid** over A with identity element 0 *under the* addition *operation* ⊕;
(ii) ⟨A, ⊗, 1⟩ is a **monoid** over A with identity element 1 *under the* multiplication *operation* ⊗;
(iii) Multiplication is **distributive** over addition, that is, ∀a, b, c ∈ A:
- a ⊗ (b ⊕ c) = a ⊗ b ⊕ a ⊗ c;
- (b ⊕ c) ⊗ a = b ⊗ a ⊕ c ⊗ a.
(iv) 0 is an **annihilator** for A, that is, ∀a ∈ A, 0 ⊗ a = a ⊗ 0 = 0.
A semiring is **idempotent** if ∀a ∈ A : a ⊕ a = a.
Definition 13. A semiring W = ⟨A, ⊕, ⊗, 0, 1⟩ is **complete** *if it is possible to extend the addition operator* ⊕ *to infinite sums, maintaining the properties of associativity, commutativity, and distributivity from the* finite case (Rozenberg and Salomaa, 1997, Chapter 9). In this case, we can define the unary operation of the **Kleene star** denoted by a superscript ∗ as the infinite sum over its operand, that is, ∀a ∈ A:
$$a^{*}\stackrel{\mathrm{def}}{=}\bigoplus_{i=0}^{\infty}a^{i}\qquad(19)$$
Analogously to Eq. (13), it then follows that:
$$a^{*}=\bigoplus_{i=0}^{\infty}a^{i}=a^{0}\oplus\bigoplus_{i=1}^{\infty}a^{i}=\mathbf{1}\oplus a\otimes\bigoplus_{i=0}^{\infty}a^{i}=\mathbf{1}\oplus a\otimes a^{*}\qquad(20)$$
and, similarly:
$$a^{*}=a^{0}\oplus\bigoplus_{i=1}^{\infty}a^{i}=\mathbf{1}\oplus\bigoplus_{i=0}^{\infty}a^{i}\otimes a=\mathbf{1}\oplus a^{*}\otimes a\qquad(21)$$
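To make the algebra above concrete, a complete semiring can be represented in code as a small bundle of its operations together with a `star` that implements $a^{*} = \mathbf{1} \oplus a \otimes a^{*}$ in closed form where one exists. The sketch below is illustrative only (not part of the paper); the class and instance names are our own.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    add: Callable[[Any, Any], Any]   # the addition operation (⊕)
    mul: Callable[[Any, Any], Any]   # the multiplication operation (⊗)
    zero: Any                        # identity of ⊕ and annihilator of ⊗
    one: Any                         # identity of ⊗
    star: Callable[[Any], Any]       # Kleene star, a* = 1 ⊕ a ⊗ a*

# Real (probability) semiring: the geometric series gives a* = 1/(1-a)
# for 0 <= a < 1 (the star diverges otherwise).
Real = Semiring(
    add=lambda a, b: a + b,
    mul=lambda a, b: a * b,
    zero=0.0,
    one=1.0,
    star=lambda a: 1.0 / (1.0 - a),
)

# Tropical (min, +) semiring: for a >= 0 the minimum over i of i*a is 0.
Tropical = Semiring(
    add=min,
    mul=lambda a, b: a + b,
    zero=float("inf"),
    one=0.0,
    star=lambda a: 0.0,
)
```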
We now discuss how complete semirings can be lifted to matrices. The definitions follow analogously
to matrices over the reals.
Definition 14. We define **semiring matrix addition** as follows. Let A and B be d × d *matrices whose*
entries are elements from a complete semiring W = ⟨A, ⊕, ⊗, 0, 1⟩. Then the sum ("+") of A and B is
defined as:
$$(A+B)_{ij}\stackrel{\mathrm{def}}{=}A_{ij}\oplus B_{ij}\qquad\qquad i,j\in 1,\ldots,d\qquad(22)$$
Definition 15. We define **semiring matrix multiplication** as follows. Let A and B be d × d matrices whose entries are elements from a complete semiring W = ⟨A, ⊕, ⊗, 0, 1⟩. Then the product ("·*") of* A
and B *is defined as:*
$$(A\cdot B)_{ij}\stackrel{\mathrm{def}}{=}\bigoplus_{k=1}^{d}A_{ik}\otimes B_{kj}\qquad\qquad i,j\in 1,\ldots,d\qquad(23)$$
We also define the **zero matrix**, O, over the complete semiring W = ⟨A, ⊕, ⊗, 0, 1⟩, such that all entries are 0, and the **unit matrix** I as (I)ij = 1 iff i = j and 0 otherwise for all indices i, j ∈ 0*, . . . , d*.
It is then straightforward to show that matrix addition is associative and commutative while matrix multiplication is associative and distributive over matrix addition. Hence, ⟨Wd×d, +, ·, O, I⟩ is a semiring.
Furthermore, by the element-wise definition of its addition operation, it is also complete.
We now consider a semiring-weighted CFG G = ⟨N , Σ, S, R*, p,* W⟩, where N , Σ, S, R are defined as before, except the (locally normalized) weighting function p is now semiring-valued:
$$p:{\mathcal{R}}\to{\mathcal{W}}{\mathrm{~such~that~}}\bigoplus_{X\to\alpha\in{\mathcal{R}}}p(X\to\alpha)=1$$
As before, we define the matrix $P$ as the square matrix of dimension $|\mathcal{N}|$ whose rows and columns are indexed by the non-terminals $\mathcal{N}$ in some fixed order, so that the entry $P_{ij}$ corresponds to $p(\mathrm{X}_i \to \mathrm{X}_j\,\bullet) = \bigoplus_{\mathrm{Y}\in\mathcal{N}} p(\mathrm{X}_i \to \mathrm{X}_j\,\mathrm{Y})$. We can then calculate the probability of getting $\mathrm{X}_j$ from $\mathrm{X}_i$ at the leftmost non-terminal after exactly $k$ derivation steps as $(P^{k})_{ij}$. Note that this holds because the production rule weights are locally normalized, meaning that we only need to consider the left-most rule applications instead of having to explicitly calculate the full treesum.
Finally, to get the left-corner expectations, we then need to calculate the Kleene closure over the matrix $P$,7 that is, we want to find $P^{*} = \bigoplus_{k=0}^{\infty} P^{k}$. To compute the Kleene closure over the transition matrix we can use an efficient algorithm by Lehmann (1977), which is a generalization of the well-known shortest-path algorithm usually attributed to Floyd (1962) and Warshall (1962), but introduced previously by Roy (1959). The algorithm works under the condition that the Kleene closure of all individual matrix entries from semiring $\mathcal{W}$ exists, which is true for our case since we assumed $\mathcal{W}$ to be complete. The algorithm is shown in Algorithm 4.
Algorithm 4 Lehmann's algorithm for computing the Kleene closure over a transition matrix

1: def Lehmann(M):
2: $d \leftarrow \dim(M)$ ▷ *M is a $d \times d$ matrix over a complete semiring*
3: $M^{(0)} \leftarrow M$
4: for $j = 1, \ldots, d$ :
5: for $i = 1, \ldots, d$ :
6: for $k = 1, \ldots, d$ :
7: $M^{(j)}_{ik} \leftarrow M^{(j-1)}_{ik} \oplus M^{(j-1)}_{ij} \otimes \big(M^{(j-1)}_{jj}\big)^{*} \otimes M^{(j-1)}_{jk}$
8: **return** $\mathbf{I} \oplus M^{(d)}$
With this, we can now generalize our prefix probability algorithm to semirings, as shown in Algorithm 5.
Proposition 4. *The semiring-weighted version of our algorithm runs in $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$.*
7Note that the Kleene closure exists since matrices with elements from a complete semiring are complete.

Proof. Lehmann's algorithm, as presented in Algorithm 4, has three nested for loops of $d$ iterations each, where $d$ is the dimension of the input matrix. In our case, $d$ is the number of non-terminals, $|\mathcal{N}|$. Assuming
Algorithm 5 Faster prefix probability algorithm over semirings

1: def FastSemiringJL($w = w_1 \cdots w_N$, $G$):
2: $\beta \leftarrow \mathrm{CKY}(w)$ ▷ *Precompute $\beta$ with Algorithm 1*
3: $P^{*} \leftarrow \mathrm{Lehmann}(P)$ ▷ *Precompute $P^{*}$ with Algorithm 4*
4: for $\mathrm{X}_i, \mathrm{X}_j \in \mathcal{N}$ : ▷ *Precompute $E_{\mathrm{lc}}(\mathrm{X}_j \mid \mathrm{X}_i)$*
5: $E_{\mathrm{lc}}(\mathrm{X}_j \mid \mathrm{X}_i) \leftarrow (P^{*})_{ij}$
6: for $i, j = 1, \ldots, N$ :
7: for $\mathrm{X}, \mathrm{Z} \in \mathcal{N}$ : ▷ *Precompute $\gamma$ by Eq. (15e)*
8: $\gamma_{ij}(\mathrm{X}, \mathrm{Z}) \leftarrow \bigoplus_{\mathrm{Y}\in\mathcal{N}} p(\mathrm{X}\to\mathrm{Y}\,\mathrm{Z}) \otimes \beta(i, j \mid \mathrm{Y})$
9: for $\mathrm{X}, \mathrm{Z} \in \mathcal{N}$ : ▷ *Precompute $\delta$ by Eq. (15f)*
10: $\delta_{ij}(\mathrm{X}, \mathrm{Z}) \leftarrow \bigoplus_{\mathrm{Y}\in\mathcal{N}} E_{\mathrm{lc}}(\mathrm{Y} \mid \mathrm{X}) \otimes \gamma_{ij}(\mathrm{Y}, \mathrm{Z})$
11: for $k \in 1, \ldots, N$ :
12: for $\mathrm{X} \in \mathcal{N}$ : ▷ *Base case*
13: $p_{\pi}(k, k \mid \mathrm{X}) \leftarrow \bigoplus_{\mathrm{Y}\in\mathcal{N}} E_{\mathrm{lc}}(\mathrm{Y} \mid \mathrm{X}) \otimes p(\mathrm{Y}\to w_k)$
14: for $\ell \in 2, \ldots, N$ :
15: for $i \in 1, \ldots, N-\ell+1$ :
16: $k \leftarrow i + \ell - 1$
17: for $\mathrm{X}, \mathrm{Z} \in \mathcal{N}$ : ▷ *Recursively compute $p_{\pi}$*
18: $p_{\pi}(i, k \mid \mathrm{X}) \leftarrow p_{\pi}(i, k \mid \mathrm{X}) \oplus \bigoplus_{j=i}^{k-1} \delta_{ij}(\mathrm{X}, \mathrm{Z}) \otimes p_{\pi}(j+1, k \mid \mathrm{Z})$
19: **return** $p_{\pi}$
the Kleene closure of elements in $\mathcal{W}$ can be evaluated in $O(1)$, this means that computing the left-corner expectations in lines 3-5 of Algorithm 5 takes $O(|\mathcal{N}|^3)$, as before. Hence, the complexity of the overall algorithm remains unchanged, that is, we can compute the prefix probabilities under a semiring-weighted, locally normalized CFG $G$ in $O(N^2|\mathcal{N}|^3 + N^3|\mathcal{N}|^2)$. ■
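For illustration, Algorithm 5 can be sketched in Python as below. This is not the authors' implementation: it reuses the `lehmann` sketch above, assumes a semiring object `sr`, a binary-rule table `rules[(X, Y, Z)]`, a lexical table `lex[(Y, token)]`, a precomputed inside chart `beta[(i, j, Y)]` (1-based, inclusive spans), and a left-corner transition matrix `P` ordered consistently with `nonterminals`.

```python
def fast_semiring_prefix(w, rules, lex, nonterminals, beta, P, sr):
    """Sketch of Algorithm 5: prefix values p_pi(i, k | X) over a semiring."""
    N = len(w)
    idx = {X: m for m, X in enumerate(nonterminals)}
    P_star = lehmann(P, sr)                                  # Algorithm 4
    E_lc = {(Xi, Xj): P_star[idx[Xi]][idx[Xj]]               # E_lc(Xj | Xi)
            for Xi in nonterminals for Xj in nonterminals}
    gamma, delta = {}, {}
    for i in range(1, N + 1):                                # lines 6-10
        for j in range(1, N + 1):
            for X in nonterminals:
                for Z in nonterminals:
                    g = sr.zero
                    for Y in nonterminals:
                        g = sr.add(g, sr.mul(rules.get((X, Y, Z), sr.zero),
                                             beta.get((i, j, Y), sr.zero)))
                    gamma[(i, j, X, Z)] = g
            for X in nonterminals:
                for Z in nonterminals:
                    d = sr.zero
                    for Y in nonterminals:
                        d = sr.add(d, sr.mul(E_lc[(X, Y)], gamma[(i, j, Y, Z)]))
                    delta[(i, j, X, Z)] = d
    p_pi = {}
    for k in range(1, N + 1):                                # lines 11-13
        for X in nonterminals:
            v = sr.zero
            for Y in nonterminals:
                v = sr.add(v, sr.mul(E_lc[(X, Y)],
                                     lex.get((Y, w[k - 1]), sr.zero)))
            p_pi[(k, k, X)] = v
    for span in range(2, N + 1):                             # lines 14-18
        for i in range(1, N - span + 2):
            k = i + span - 1
            for X in nonterminals:
                acc = p_pi.get((i, k, X), sr.zero)
                for Z in nonterminals:
                    for j in range(i, k):
                        acc = sr.add(acc, sr.mul(delta[(i, j, X, Z)],
                                                 p_pi[(j + 1, k, Z)]))
                p_pi[(i, k, X)] = acc
    return p_pi
```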
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
As this is a theoretical result about a runtime improvement of an algorithm, we were unable to identify any risks from this work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✗ **Did You Run Computational Experiments?**

Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
gonzalez-gutierrez-etal-2023-analyzing | Analyzing Text Representations by Measuring Task Alignment | https://aclanthology.org/2023.acl-short.7 | Textual representations based on pre-trained language models are key, especially in few-shot learning scenarios. What makes a representation good for text classification? Is it due to the geometric properties of the space or because it is well aligned with the task? We hypothesize the second claim. To test it, we develop a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity. Our experiments on text classification validate our hypothesis by showing that task alignment can explain the classification performance of a given representation. | # Analyzing Text Representations By Measuring Task Alignment
Cesar Gonzalez-Gutierrez, Audi Primadhanty, Francesco Cazzaro, Ariadna Quattoni Universitat Politècnica de Catalunya, Barcelona, Spain
{cesar.gonzalez.gutierrez, audi.primadhanty, francesco.cazzaro}@upc.edu, [email protected]
## Abstract
Textual representations based on pre-trained language models are key, especially in few-shot learning scenarios. What makes a representation good for text classification? Is it due to the geometric properties of the space or because it is well aligned with the task? We hypothesize the second claim. To test it, we develop a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity. Our experiments on text classification validate our hypothesis by showing that task alignment can explain the classification performance of a given representation.
## 1 Introduction
Recent advances in text classification have shown that representations based on pre-trained language models are key, especially in few-shot learning scenarios (Ein-Dor et al., 2020; Lu et al., 2019). It is natural to ask: What makes a representation good for text classification in this setting? Is the representation good due to intrinsic geometric properties of the space or because it is well *aligned* with the classification task? The goal of this paper is to answer this question to better understand the reason behind the performance gains obtained with pre-trained representations.
Our hypothesis is that representations better aligned with class labels will yield improved performance in few-shot learning scenarios. The intuition is simple: in this setting, the limited number of labeled samples will only provide a sparse coverage of the input domain. However, if the representation space is properly aligned with the class structure, even a small sample can be representative. To illustrate this, take any classification task. Suppose we perform clustering on a given representation space that results in a few pure clusters (with all samples belonging to the same class). Then, any training set that 'hits' all the clusters can be representative. Notice that there is a trade-off between the number of
![0_image_0.png](0_image_0.png)
clusters and their purity. A well-aligned representation is one for which we can obtain a clustering with a small number of highly pure clusters. Based on this, we propose a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity: Task Hierarchical Alignment Score (THAS).
To test our hypothesis that task alignment is key we conduct experiments on several text classification datasets comparing different representations.
Our results show that there is a clear correlation between the THAS of a representation and its classification performance under the few-shot learning scenario, validating our hypothesis and showing that task alignment can explain performance. In contrast, our empirical study shows that intrinsic geometric properties measured by classical clustering quality metrics fail to explain representation performance in the few-shot learning scenario.
Our study suggests an answer to our main question: A good efficient representation (i.e. one that enables few-shot learning) is a representation that induces a good alignment between latent input structure and class structure. Our main contributions are: 1) We develop a score based on hierarchical clustering (§2) that measures the extent to which a representation space is aligned with a given class structure and 2) We conduct an empirical study using several textual classification datasets
(§3) that validates the hypothesis that the best representations are those with a latent input structure that is well aligned with the class structure.
## 2 Task Hierarchical Alignment Score
We now present the Task Hierarchical Alignment Score (THAS), designed to measure the alignment between a textual representation and the class label for a given task. The idea is quite simple: in a good representation space, points that are close to each other should have a higher probability of belonging to the same class. Therefore, we could perform clustering of the points and obtain *high* purity clusters, where most points belong to the same class. We assume that we are given: a dataset $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$ of $n$ labeled data points, where $\mathbf{x} \in \mathcal{X}$ is a text fragment and $y \in \mathcal{Y}$ its corresponding class label (e.g., a sentiment classification label), and a representation function $r : \mathcal{X} \to \mathbb{R}^{d}$ mapping points in $\mathcal{X}$ to a $d$-dimensional representation space $\mathbb{R}^{d}$ (e.g., a sparse bag-of-words).
Our goal is to compute a metric $\tau(S, r)$ that takes some labeled domain data and a representation function and computes a real-valued score.
Fig. 1 illustrates the steps involved in computing THAS. There are three main steps: 1) hierarchical clustering, 2) computing clustering partition alignments, and 3) computing the aggregate metric.
In the first step, we compute the representation of each point and build a data dendrogram using hierarchical clustering. The data dendrogram is built by merging clusters, progressively unfolding the latent structure of the input space. Traversing the tree, for each level we get a partition of the training points into k clusters. In step 2, for each partition, we measure its alignment with the class label distribution producing an alignment curve as a function of k. Finally, we report the area under this curve.
Algorithm 1 summarizes the whole procedure. Implementation details and performance information can be found in A.1.
## 2.1 Hierarchical Clustering
In the first step, we will consider the input points $X = \{\mathbf{x}_i \mid (\mathbf{x}_i, y_i) \in S\}$ and the representation function $r$ to obtain a representation of all points $R = \{r(\mathbf{x}_i) \mid \mathbf{x}_i \in X\}$.
We then apply Hierarchical Clustering (HC) to the points in $R$, obtaining a dendrogram $D = \mathrm{HC}(R) = \{\mathcal{P}_k\}_{k=1}^{n}$ that defines a set of $n$ cluster partitions. Fig. 1 (left) shows a diagram of a dendrogram.
Algorithm 1: THAS

Input: Dataset $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, representation function $r$
Output: $\tau(S, r)$
1 Get representation: $R = \{r(\mathbf{x}_i) \mid \mathbf{x}_i \in X\}$
2 Run Hierarchical Clustering: $D = \mathrm{HC}(R) = \{\mathcal{P}_k\}_{k=1}^{n}$
3 Traverse the dendrogram: foreach partition $\mathcal{P}_k \subset D$ do
4 Predict scores for all points: foreach point $\mathbf{x}_i \in X$, $i = 1, \ldots, n$, where $r(\mathbf{x}_i) \in C \subset \mathcal{P}_k$ do
5 Label prediction scores: foreach $y'_j \in \mathcal{Y}$, $j = 1, \ldots, |\mathcal{Y}|$ do $\hat{Y}_{k,i,j} = s(\mathbf{x}_i, y'_j)$
6 Partition alignment score: $a(\mathcal{P}_k) = \mathrm{AUC}_{y^{+}}(\hat{Y}_k, Y)$
7 Final aggregate metric: $\tau(S, r) = \frac{1}{n}\sum_{k=1}^{n} a(\mathcal{P}_k)$
The root of this tree is the whole set and, at the leaves, each point corresponds to a singleton. At intermediate levels, top-down branching represents set splitting.
For each level $k = 1, \ldots, n$ of the dendrogram there is an associated clustering partition of the input points into $k$ clusters $\mathcal{P}_k = \{C_j\}_{j=1}^{k}$. That is, for any particular level we have a family of $k$ non-empty disjoint clusters that cover the representation $R = \bigcup_{j=1}^{k} C_j$, where each representation point $r(\mathbf{x}) \in R$ is assigned to one of the $k$ clusters.
## 2.2 Partition Alignment Score
We use the gold labels Y = {yi| (xi, yi) ∈ S} to compute an alignment score a(Pk) for each partition Pk ⊂ D. We compute it in two parts.
First, for every point x ∈ X and label y′ ∈ Y
we compute a label probability score by looking at the gold label distribution of the cluster C to which the point belongs in the clustering partition:
$$s(\mathbf{x},y^{\prime})={\frac{1}{|C|}}\#[y^{\prime}\in C]\qquad\qquad(1)$$
where \#[y′ ∈ C] is the number of samples in cluster C with gold label y′. Intuitively, this assigns to a point x a label probability that is proportional to the distribution of that label in the cluster C.
Second, we use the label probability scores of all points $\hat{Y}_k = \{s(\mathbf{x}_i, y'_j) \mid \mathbf{x}_i \in X,\ y'_j \in \mathcal{Y}\}$ and the dataset gold labels $Y$ to compute a partition alignment score.
| Repr. | ALC: IM | WT | CC | S1 | µ | THAS: IM | WT | CC | S1 | µ | ADBI: IM | WT | CC | S1 | µ |
|----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|------|------|------|------|
| BERTall | .84 | .50 | .32 | .79 | .61 | .84 | .67 | .27 | .75 | .63 | 2.87 | 3.03 | 3.31 | 3.25 | 3.11 |
| GloVe | .80 | .48 | .26 | .74 | .57 | .80 | .63 | .26 | .73 | .60 | 2.62 | 2.12 | 2.01 | 2.47 | 2.31 |
| BERTcls | .80 | .48 | .23 | .74 | .56 | .80 | .56 | .22 | .74 | .58 | 2.81 | 2.97 | 3.15 | 2.92 | 2.96 |
| fastText | .75 | .41 | .18 | .66 | .50 | .77 | .57 | .21 | .71 | .56 | 2.78 | 2.13 | 1.93 | 2.47 | 2.33 |
| BoW | .76 | .32 | .11 | .59 | .45 | .71 | .50 | .20 | .68 | .52 | 3.14 | 3.83 | 4.23 | 3.86 | 3.76 |
We choose as a single metric the area under the precision-recall curve (AUC) because it has the nice property that it applies to tasks with both balanced and unbalanced class distributions.1 More specifically, we compute the AUC of the target (positive) class $y^{+} \in \mathcal{Y}$ of the dataset (more details in the experimental part in §3):
$$a(\mathcal{P}_{k})=\mathrm{AUC}_{y^{+}}(\hat{Y}_{k},Y)\qquad\qquad(2)$$

1F1 could be a valid alternative, but this metric requires the validation of decision thresholds.
## 2.3 Final Aggregate Metric: THAS
Once we have an alignment score for every level of the hierarchical dendrogram, we are ready to define our final Task Hierarchical Alignment Score
(THAS). Consider the alignment scoring function $a$ applied to the partition corresponding to the lowest level of the dendrogram. The alignment score will be $a(\mathcal{P}_n) = 1$ because every cluster in this partition is a singleton and therefore $\#[y' \in C]$ will be 1 for the gold label and 0 for any other label. At the other end, for the partition corresponding to the root of the dendrogram (where all points belong to a single cluster), the alignment score $a(\mathcal{P}_1)$ is the AUC corresponding to assigning to every point $\mathbf{x} \in X$ a prediction score for each label $y' \in \mathcal{Y}$ equal to the relative frequency of $y'$ in $Y$.
Consider now the alignment score as a function of the size of the partition. As we increase $k$ we will get higher scores. A good representation is one that can get a high score while using as few clusters as possible. Instead of choosing a predefined level of granularity, we propose to leverage the alignment information across all levels. To achieve this, we consider the alignment score as a function of the number of clusters and measure the area under $a(\mathcal{P}_k)$.2 We are ready to define our final metric:
$$\tau(S,r)=\frac{1}{n}\sum_{k=1}^{n}a({\mathcal{P}}_{k})\qquad\qquad(3)$$
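For illustration, a minimal THAS computation can be assembled from SciPy's hierarchical clustering and scikit-learn's average-precision score (the area under the precision-recall curve). This is a sketch of Algorithm 1, not the authors' implementation: the function names and the use of `cut_tree` are our own choices, and the naive loop over all $n$ levels is quadratic (the paper reports running on sub-samples).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree
from sklearn.metrics import average_precision_score

def thas(R, y, positive_label=1):
    """Task Hierarchical Alignment Score (sketch of Algorithm 1).

    R : (n, d) array of representations r(x_i)
    y : (n,) array of gold labels
    """
    n = R.shape[0]
    Z = linkage(R, method="ward")            # data dendrogram
    y_pos = (y == positive_label).astype(int)
    scores = []
    for k in range(1, n + 1):                # one partition per level
        labels = cut_tree(Z, n_clusters=k).ravel()
        s = np.empty(n)
        for c in np.unique(labels):
            members = labels == c
            # label probability score: relative frequency of the
            # positive class inside the cluster (Eq. 1)
            s[members] = y_pos[members].mean()
        # partition alignment score: AUC of the positive class (Eq. 2)
        scores.append(average_precision_score(y_pos, s))
    # final aggregate metric (Eq. 3): mean over all levels
    return float(np.mean(scores))
```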
## 3 Experimental Setup
In this section we empirically study the correlation of few-shot learning performance with 1) THAS
and 2) an unsupervised clustering quality metric.
We use four text classification datasets with both balanced and imbalanced label distributions:
IMDB (IM; Maas et al., 2011), WikiToxic (WT;
Wulczyn et al., 2017), Sentiment140 (S1; Maas et al., 2011) and CivilComments (CC; Borkan et al.,
2019).
We will compare the following representations:
a sparse bags-of-words (BoW); BERT embeddings
(Devlin et al., 2019) using two token average pooling strategies (BERTall and BERTcls); GloVe (Pennington et al., 2014); and fastText (Bojanowski et al., 2017; Joulin et al., 2016).
For further details, please refer to A.2.
## 3.1 Few-Shot Performance Vs. THAS
Since the focus of these experiments is comparing representations, we follow previous work on probing representations and use a simple model (Tenney et al., 2019; Lu et al., 2019). More precisely, we use a linear max-entropy classifier trained with l2 regularization.
To simulate a few-shot learning scenario, we create small training sets by selecting N random samples, from 100 to 1000 in increments of 100.
For each point N in the learning curve we create an 80%/20% 5-fold cross-validation split to find the optimal hyper-parameters. We then train a model using the full N training samples and measure its performance on the test set. We repeat the experiment with 5 random seeds and report the mean results. As the evaluation metric, we use accuracy for the balanced datasets (IMDB and Sentiment140) and F1 for the imbalanced datasets (WikiToxic and CivilComments).

2We could consider weighting methods that neutralize uninformative areas in the curve. In particular, we could subtract the scores originating from a random clustering. However, this contribution is solely determined by the sample size and the prior distribution. As a result, it would not have any impact when comparing representations.
We generate learning curves for each dataset and representation (A.3). To study the correlation between task alignment and few-shot learning performance, it is useful to have a single score that summarizes the learning curve: We use the area under the learning curve (ALC). Representations with a larger ALC perform better in the few-shot learning scenario.3 We observe that BERTall is consistently the best representation followed by BERTcls and GloVe performing similarly. Representations based on word embeddings are better than the sparse baseline for all datasets, except for fastText which does not exhibit a consistent improvement.
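A simplified version of this learning-curve protocol can be sketched with scikit-learn as below. This is illustrative only: the grid of $\ell_2$ strengths, the seed handling, and the use of the mean over curve points as a stand-in for the area under the learning curve are all simplifications of the setup described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV

def area_under_learning_curve(X_pool, y_pool, X_test, y_test,
                              sizes=range(100, 1001, 100), seed=0):
    """Train a linear max-entropy (logistic regression) model on
    increasing numbers of labelled samples and summarise the curve."""
    rng = np.random.default_rng(seed)
    scores = []
    for n in sizes:
        idx = rng.choice(len(y_pool), size=n, replace=False)
        # 5-fold CV over the N samples to pick the l2 regularisation strength
        clf = GridSearchCV(LogisticRegression(max_iter=1000),
                           {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
        clf.fit(X_pool[idx], y_pool[idx])
        scores.append(accuracy_score(y_test, clf.predict(X_test)))
    # for evenly spaced sizes, the mean is proportional to the area (ALC)
    return float(np.mean(scores))
```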
To test for correlation, we also computed THAS
for each representation and dataset. (The corresponding curves can be found in A.3.) Since this metric is a measure of the alignment between a label distribution and an input representation, there is a THAS score per label.4 In the classification tasks that we consider there is always a single target class (e.g., toxicity for WikiToxic). We measure the alignment score with respect to this class.
Table 1 summarizes the results showing ALC
(left) and corresponding THAS (center) for all representations and datasets. Overall, BERTall is the best representation for few-shot learning followed by GloVe and BERTcls. All the representations based on pre-trained word embeddings significantly outperform the baseline sparse BoW representation.
THAS predicts accurately the relative ranking between representations and the larger gap between BERTall and the rest. Fig. 2 shows a scatter plot of THAS as a function of ALC (blue dots; each point corresponds to a dataset and representation).
We compute the correlation coefficients, which are displayed in Table 2. We observe a clear positive correlation between the two metrics, providing
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
| (µ)ALC vs | $r_p$ (p-value) | $r_s$ (p-value) |
|-------------|----------------|----------------|
| THAS | 0.98 ($< 10^{-12}$) | 0.99 ($< 10^{-17}$) |
| ADBI | 0.11 (0.62) | 0.07 (0.76) |
| µTHAS | 0.98 (0.002) | 1.0 (0.017) |
| µADBI | −0.41 (0.48) | −0.3 (0.68) |
supporting evidence for our main hypothesis that a good representation under few-shot learning is a representation that is well aligned with the classification task.
## 3.2 Unsupervised Clustering Quality
We now look at standard metrics of cluster quality and test if they can explain few-shot learning performance. We use the Davies and Bouldin (1979)
index (DBI) to measure the quality of the cluster partitions at every level of the dendrogram. This metric measures the compactness of each cluster and their separation, with better cluster partitions scoring lower. Similar to the computation of THAS
described in §2, we compute DBI as a function of the number of clusters k corresponding to each level of the dendrogram. As an aggregate metric, we calculate the area under these curves to obtain a single ADBI score. (The curves are shown in A.3.)
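A corresponding sketch of the unsupervised score, using scikit-learn's `davies_bouldin_score` over the same dendrogram levels, is shown below (illustrative only; the paper additionally averages over sub-samples and seeds).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree
from sklearn.metrics import davies_bouldin_score

def adbi(R, max_k=None):
    """Area under the Davies-Bouldin curve across dendrogram levels."""
    n = R.shape[0]
    Z = linkage(R, method="ward")
    ks = range(2, (max_k or n))          # DBI requires at least 2 clusters
    dbi = []
    for k in ks:
        labels = cut_tree(Z, n_clusters=k).ravel()
        dbi.append(davies_bouldin_score(R, labels))
    return float(np.mean(dbi))
```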
The right side of Table 1 shows the results for the same datasets and representations used for THAS. GloVe induces the best clusters according to the ADBI metric. BERTall does not produce particularly good clusters despite being the strongest fewshot representation. Fig. 2 (red crosses) and Table 2 show that there is a low correlation between the two metrics. This suggests that the geometric properties of the clusters alone can not explain few-shot performance.
## 4 Related Work
Representation choice has recently gained significant attention from the active learning (AL) community (Schröder and Niekler, 2020; Shnarch et al.,
2022; Zhang et al., 2017). Some work has attempted to quantify what representation is best when training the initial model for AL, which is usually referred to as the cold start problem (Lu et al., 2019). The importance of word embeddings has been also studied in the context of highly imbalanced data scenarios (Sahan et al., 2021; Naseem et al., 2021; Hashimoto et al., 2016; Kholghi et al.,
2016). Most research conducted by the AL community on textual representations has focused on determining *which* representations lead to higher performance for a given task. However, our paper aims to investigate why a certain representation performs better in the few-shot scenario.
Our work, focused on examining properties of various textual representations, is closely related to recent research on evaluating the general capabilities of word embeddings. Many studies are interested in testing the behavior of such models using probing tasks that signal different linguistic skills
(Conneau et al., 2018; Conneau and Kiela, 2018; Marvin and Linzen, 2018; Tenney et al., 2019; Miaschi and Dell'Orletta, 2020). Others have targeted the capacity of word embeddings to transfer linguistic content (Ravishankar et al., 2019; Conneau et al., 2020).
Looking at approaches that analyze the properties of representations directly, without intermediate probes, Saphra and Lopez (2019) developed a correlation method to compare representations during consecutive pre-training stages. Analyzing the geometric properties of contextual embeddings is also an active line of work (Reif et al., 2019; Ethayarajh, 2019; Hewitt and Manning, 2019). While these previous works focus on analyzing representation properties independently, without considering a specific task, our study investigates the relationship between representations and task labels. We conduct a comparison between this relationship and the unsupervised analysis of representation properties.
Our work falls in line with broader research on the relationship between task and representation. Yauney and Mimno (2021) proposed a method to measure the alignment between documents and labels in a given representation space using a data complexity measure developed in the learning-theory community. In the computer vision area, Frosst et al. (2019) introduced a loss metric and investigated the entanglement of classes in the representation space during the learning process.
Zhou and Srikumar (2021) proposed a heuristic to approximate the version space of classifiers using hierarchical clustering, highlighting how representations induce the separability of class labels, thereby simplifying the classification task. In contrast, our work specifically examines the few-shot performance and emphasizes the importance of unbalanced scenarios. We find that in these more realistic situations, the choice of representation plays a critical role, paving the way for advanced strategies in active learning.
## 5 Conclusion
In this paper, we asked the question: What underlying property characterizes a good representation in a few-shot learning setting? We hypothesized that good representations are those in which the structure of the input space is well aligned with the label distribution. We proposed a metric to measure such alignment: THAS. To test our hypothesis, we conducted experiments on several textual classification datasets, covering different classification tasks and label distributions (i.e. both balanced and unbalanced). We compared a range of word embedding representations as well as a baseline sparse representation.
Our results showed that when labeled data is scarce the best-performing representations are those where the input space is well aligned with the labels. Furthermore, we showed that the performance of a representation can not be explained by looking at classical measures of clustering quality.
The main insight provided in this work could be leveraged to design new strategies in active learning. The fact that good representations induce clusters of high purity at different granularities creates opportunities for wiser exploration of the representation space in an active manner. Similar to the work of Dasgupta and Hsu (2008), we could employ the data dendrogram to guide this exploration.
## Limitations
In this paper, we focused on analyzing the properties of textual representations in the few-shot learning scenario. Its applicability to broader annotation scenarios could be presumed but is not supported by our empirical results.
Our experimental setup is based on binary classification tasks using English datasets. While our approach is general and could be easily extended to multi-class scenarios, more work would be required to extend it to other more complex structured prediction settings such as sequence tagging.
We see several ways in which this work could be extended. The most obvious extension consists of trying to generalize the notion of alignment to other tasks beyond sequence classification, such as sequence tagging. In this paper, we have used THAS to understand the quality of a given textual representation. However, since THAS is a function of a labeling and a representation, it could also be used to measure the quality of a labeling (Yan and Huang, 2018), given a fixed representation.
For example, this might be used in the context of hierarchical labeling, to measure which level of label granularity is better aligned with some input representation.
The goal of this paper was to provide an explanation for the success of pre-trained word embeddings for text classification in the few-shot learning scenario. We believe that with our proposed methodology we have successfully achieved this goal. However, it should be clear to the reader that we do not provide a method for picking the best representation, i.e. for model selection. This is because our analysis requires access to labeled data and if labeled data is available the best way to select a model will be via cross-validation.
## Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 853459. The authors gratefully acknowledge the computer resources at ARTEMISA, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Física Corpuscular, IFIC (CSIC-UV). This research is supported by a recognition 2021SGR-Cat (01266 LQMC) from AGAUR (Generalitat de Catalunya).
## References
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Technical Report arXiv:1607.04606, arXiv. ArXiv:1607.04606 [cs].
Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification. *arXiv:1903.04561 [cs, stat]*.
ArXiv: 1903.04561.
Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!\#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–
6034, Online. Association for Computational Linguistics.
Sanjoy Dasgupta and Daniel Hsu. 2008. Hierarchical sampling for active learning. In *Proceedings of the* 25th international conference on Machine learning -
ICML '08, pages 208–215, Helsinki, Finland. ACM
Press.
David Davies and Don Bouldin. 1979. A Cluster Separation Measure. *IEEE Transactions on Pattern Analysis* and Machine Intelligence, PAMI-1:224–227.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020.
Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7949–7962, Online. Association for Computational Linguistics.
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
Nicholas Frosst, Nicolas Papernot, and Geoffrey Hinton.
2019. Analyzing and improving representations with the soft nearest neighbor loss. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning* Research, pages 2012–2020. PMLR.
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E.
Oliphant. 2020. Array programming with NumPy.
Nature, 585(7825):357–362.
Kazuma Hashimoto, Georgios Kontonatsios, Makoto Miwa, and Sophia Ananiadou. 2016. Topic detection using paragraph vectors to support active learning in systematic reviews. *Journal of Biomedical Informatics*, 62:59–65.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of Tricks for Efficient Text Classification. Technical Report arXiv:1607.01759, arXiv. ArXiv:1607.01759 [cs].
Mahnoosh Kholghi, Lance De Vine, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2016. The Benefits of Word Embeddings Features for Active Learning in Clinical Information Extraction. In *Proceedings of the Australasian Language Technology Association Workshop 2016*, pages 25–34, Melbourne, Australia.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jinghui Lu, Maeve Henchion, and Brian Mac Namee.
2019. Investigating the Effectiveness of Representations Based on Word-Embeddings in Active Learning for Labelling Text Datasets. ArXiv:1910.03505 [cs, stat].
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics.
Alessio Miaschi and Felice Dell'Orletta. 2020. Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 110–119, Online. Association for Computational Linguistics.
F. Murtagh. 1983. A Survey of Recent Advances in Hierarchical Clustering Algorithms. *The Computer* Journal, 26(4):354–359.
Usman Naseem, Matloob Khushi, Shah Khalid Khan, Kamran Shaukat, and Mohammad Ali Moni. 2021.
A Comparative Analysis of Active Learning for Biomedical Text Mining. *Applied System Innovation*, 4(1):23.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems, volume 32. Curran Associates, Inc.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Vinit Ravishankar, Lilja Øvrelid, and Erik Velldal. 2019.
Probing multilingual sentence representations with X-probe. In *Proceedings of the 4th Workshop on* Representation Learning for NLP (RepL4NLP-2019),
pages 156–168, Florence, Italy. Association for Computational Linguistics.
Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B
Viegas, Andy Coenen, Adam Pearce, and Been Kim.
2019. Visualizing and Measuring the Geometry of BERT. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
Marko Sahan, Vaclav Smidl, and Radek Marik. 2021.
Active Learning for Text Classification and Fake News Detection. In *2021 International Symposium* on Computer Science and Intelligent Controls (ISCSIC), pages 87–94. IEEE Computer Society.
Naomi Saphra and Adam Lopez. 2019. Understanding learning dynamics of language models with SVCCA.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3257–3267, Minneapolis, Minnesota. Association for Computational Linguistics.
Christopher Schröder and Andreas Niekler. 2020. A
Survey of Active Learning for Text Classification using Deep Neural Networks. ArXiv:2008.07267
[cs] version: 1.
Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen, Ranit Aharonov, and Noam Slonim. 2022. Cluster & tune: Boost cold start performance in text classification. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7639–7653, Dublin, Ireland. Association for Computational Linguistics.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context?
Probing for sentence structure in contextualized word representations. ArXiv:1905.06316 [cs].
Joe H. Ward. 1963. Hierarchical Grouping to Optimize an Objective Function. *Journal of the American Statistical Association*, 58(301):236–244.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017.
Ex Machina: Personal Attacks Seen at Scale. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, pages 1391–1399, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
Yi-Fan Yan and Sheng-Jun Huang. 2018. Cost-effective active learning for hierarchical multi-label classification. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence,*
IJCAI-18, pages 2962–2968. International Joint Conferences on Artificial Intelligence Organization.
Gregory Yauney and David Mimno. 2021. Comparing text representations: A theory-driven approach.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5527–5539, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ye Zhang, Matthew Lease, and Byron Wallace. 2017.
Active Discriminative Text Representation Learning.
Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).
Yichu Zhou and Vivek Srikumar. 2021. DirectProbe:
Studying representations without classifiers. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5070–5083, Online. Association for Computational Linguistics.
## A Appendix

## A.1 THAS Implementation Details
The data dendrogram is obtained via hierarchical agglomerative clustering. More precisely, we use a bottom-up algorithm that starts with each sample as a singleton cluster and consecutively merges clusters according to a similarity metric and merge criterion until a single cluster is formed.
We apply Ward's (1963) method, which uses the squared Euclidean distance between samples and then minimizes the total within-cluster variance by finding consecutive pairs of clusters with a minimal increase. The clustering algorithm produces a list of merges that represent a dendrogram and can be traversed to generate a clustering partition for each value of k. It was implemented using Scikit-learn
(Pedregosa et al., 2011) and NumPy (Harris et al.,
2020).
Expressed as a nearest-neighbor chain algorithm, Ward's method has a time complexity of $O(n^2)$
(Murtagh, 1983). THAS experiments have been performed using sub-samples of size 10K and averaged over 5 seeds. Using 32 CPUs and 16GiB
of RAM, each agglomerative clustering took on average 3.3 minutes. Each task alignment curve took 3 minutes on average. In contrast, DBI curves took 7.8 hours on average.
## A.2 Experimental Details
Datasets. Table 3 shows the statistics of the datasets used in this paper. They were extracted from HuggingFace Datasets (Lhoest et al., 2021).
For WikiToxic and CivilComments, we have applied a pre-processing consisting of removing all markup code and non-alpha-numeric characters.
| Dataset | Size | Prior | Task |
|---|---|---|---|
| IMDB | 50K | 50% | sentiment |
| WikiToxic | 224K | 9% | toxicity |
| Sentiment140 | 1.6M | 50% | sentiment |
| CivilComments | 2M | 8% | toxic behav. |

Table 3: Datasets statistics with the number of samples, target (positive) class prior, and classification task.
Representations. The following is a detailed description of the text representations used in our experiments:
BoW: this is a standard sparse term frequency bag-of-words representation.
BERTall: word embeddings from Devlin et al.'s
(2019) BERTBASE uncased model, average pooling of 2nd to last layers and average pooling of all tokens.
BERTcls: the same as above but using the [CLS]
token alone.
GloVe: Pennington et al.'s (2014) word vectors pre-trained on Common Crawl with average pooling.
fastText: word vectors from Bojanowski et al.
(2017); Joulin et al. (2016) pre-trained on Wikipedia with average pooling.
BERT representations were extracted using the HuggingFace Transformers library (Wolf et al.,
2020) implemented in PyTorch (Paszke et al.,
2019).
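For reference, a BERTall-style representation can be extracted roughly as follows with the Transformers library. This is a sketch rather than the authors' code: we read "average pooling of 2nd to last layers" as taking the second-to-last hidden layer, which is only one possible interpretation, and the model name and batch handling are our own choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

@torch.no_grad()
def bert_all(texts):
    """Mean-pool BERT token embeddings (BERTall-style) for a batch of texts."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**enc)
    # hidden_states: (embedding output, layer 1, ..., layer 12)
    hidden = out.hidden_states[-2]                      # second-to-last layer
    mask = enc["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    summed = (hidden * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1)
    return (summed / counts).numpy()                    # (batch, 768)
```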
Models. The parameters for max-entropy learning curves were validated using 5-fold crossvalidation and the results averaged over subsamples from 5 seeds.
## A.3 Curves
Fig. 3 presents the curves used to compute the main results in §3. The left column contains the learning curves used to compute the few-shot learning performance of the different datasets and representations. The center column shows task alignment scores as a function of the number of clusters. THAS is computed as the area under these curves. The pre-trained word embeddings, in particular BERT, tend to achieve the best results. In the curves, they show higher values of alignment for a small number of clusters. The relative performance of the representations in the learning curves is paralleled in the task hierarchical alignment curves.
BERTall (i.e. using average pooling over all tokens)
seems to be superior to BERTcls (i.e. using only the [CLS] token).
The right column in Fig. 3 shows the DBI curves as a function of the number of clusters. These curves were used to compute the unsupervised clustering metric (ADBI) results presented in §3.2. As shown in the figure, these curves do not preserve the relative ranking we find in the corresponding learning curves.
![9_image_0.png](9_image_0.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (unnumbered)
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**

3
✓ B1. Did you cite the creators of artifacts you used?
3

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
A.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
A.2
## C ✓ **Did You Run Computational Experiments?**

3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
A.2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3, A.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3, A.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
A.2

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
khare-etal-2023-tracing | Tracing Linguistic Markers of Influence in a Large Online Organisation | https://aclanthology.org/2023.acl-short.8 | Social science and psycholinguistic research have shown that power and status affect how people use language in a range of domains. Here, we investigate a similar question in a large, distributed, consensus-driven community with little traditional power hierarchy {--} the Internet Engineering Task Force (IETF), a collaborative organisation that designs internet standards. Our analysis based on lexical categories (LIWC) and BERT, shows that participants{'} levels of influence can be predicted from their email text, and identify key linguistic differences (e.g., certain LIWC categories, such as {``}WE{''} are positively correlated with high-influence). We also identify the differences in language use for the same person before and after becoming influential. |
## Tracing Linguistic Markers Of Influence In A Large Online Organisation
Prashant Khare∗, Ravi Shekhar†, Mladen Karan∗**, Stephen McQuistin**‡,
Colin Perkins‡, Ignacio Castro∗, Gareth Tyson∗§, Patrick G.T. Healey∗**, Matthew Purver**∗¶
∗Queen Mary University of London, †University of Essex, ‡University of Glasgow
§Hong Kong University of Science & Technology, ¶Jožef Stefan Institute
{p.khare, m.karan, i.castro, g.tyson, p.healey, m.purver}@qmul.ac.uk, [email protected], [email protected], [email protected]
## Abstract
Social science and psycholinguistic research have shown that power and status affect how people use language in a range of domains.
Here, we investigate a similar question in a large, distributed, consensus-driven community
- the Internet Engineering Task Force (IETF),
a collaborative organisation that develops technical standards for the Internet. Our analysis, based on lexical categories (LIWC) and BERT, shows that participants' levels of influence can be predicted from their email text, and identifies key linguistic differences (e.g., certain LIWC categories, such as WE are positively correlated with high-influence). We also identify the differences in language use for the same person before and after becoming influential 1.
## 1 Introduction And Related Work
Motivation Online communities are rapidly growing. It is imperative to study them to gain a better understanding of online dynamics and important processes such as decision-making. Prior work has shown that influence is an important aspect to consider while analysing online community dynamics (Bapna and Umyarov, 2015; Vega et al.,
2021). Social and psycholinguistic research has also revealed that a person's power and status (i.e.,
influence) is reflected in their usage of language
(Nguyen et al., 2016; Guinote, 2017). In this paper, we focus on linguistic traits exhibited by influential people in a large online community.
Detecting meaningful domain-independent indicators of influence is difficult (Danescu-Niculescu-Mizil et al., 2012). Instead, we focus on the Internet Engineering Task Force2(IETF) - a large, open, voluntary, standards developing organisation with over 2M emails between 56k participants over 20 years. The decentralised, consensus-oriented nature of the IETF makes it an interesting case study for two reasons. First, compared to the social media data commonly used in similar studies (e.g. Tchokni et al., 2014; Prabhakaran, 2015), IETF emails are usually longer and goal-oriented. Second, the IETF is a decentralised organisation where the decision-making is collaborative and consensus-driven (Bradner, 1996; Resnick, 2014). Hence, the resulting social interactions are very different to alternative email-based datasets such as the Enron Corpus (Klimt and Yang, 2004), or interactions with more rigidly defined power distinctions e.g.,
admins/users, judges/lawyers (Danescu-Niculescu-Mizil et al., 2012).

1Code: https://github.com/sodestream/acl2023-tracing-linguistic-markers

2IETF is responsible for producing technical standards for internet infrastructure. https://www.ietf.org/
Related Work Most studies of influence either focus on community structure rather than language, or use language indirectly. Urena et al. (2019) give a survey of the former approach. In an example of the latter, Prabhakaran et al. (2014) compare users with different influence in terms of their linguistic similarity or *co-adaptation*, the increasing similarity of interlocutors to each other in how they use language (see also Danescu-Niculescu-Mizil et al., 2012; Ver Steeg and Galstyan, 2013; Noble and Fernández, 2015; Kawabata et al., 2016; Buske, 2019; Healey et al., 2023). Some studies
(Bramsen et al., 2011; Gilbert, 2012) do focus on modelling influence from text of Enron emails by identifying keywords/phrases that indicate influence. Rosenthal (2014) and Tchokni et al. (2014)
extend this approach to other domains, including Twitter, Wikipedia talk pages, and debates, and include a wider range of linguistic markers.
Goals We focus on discovering linguistic markers of influence in a large consensus-driven standards developing organisation, where the consensus is based on elaborate discussions between participants on mailing lists. To complement this analysis, we also study the linguistic behaviour of participants at different hierarchical levels in IETF, as well as participants in different periods of their participation, similar to Danescu-Niculescu-Mizil et al. (2013), who considered the behaviour of participants as a measure of influence and claim that participants tend to echo the linguistic style of influential individuals. We map this to three research questions: **RQ1:** How do linguistic traits differ between more and less influential participants? **RQ2:** How do linguistic traits vary for participants at different levels of the organisation hierarchy? **RQ3:** *How does linguistic behaviour of participants change as they gain influence?*
## 2 Methodology
We aim to understand the correlation between influence, as defined by either network-based centrality metrics (*mail-based*) or organisational role influence (*role-based*), and language usage in terms of linguistic traits. For each participant, we consider the emails they sent in a given time period and investigate correlations of certain features of their email text with two different measures of influence.
LIWC Representation Linguistic Inquiry and Word Count (LIWC, Pennebaker et al., 2015) is a well-recognised psycholinguistic lexicon; it provides word counts for 85 different linguistic, psychological, personal concern, and informal language marker categories. Here, we aggregate the word counts within each linguistic category for each participant using the LIWC 2015 dictionary
(academic license), and normalise by the total number of emails sent by that participant. Such a normalisation is more appropriate here than normalising by total number of words written, as many IETF
emails include long technical sections. This generates a representation of a participant as their mean usage of each LIWC category; while this is a relatively reduced, low-dimensional representation of a person's language, it has the advantage of being interpretable and psychologically well-motivated.
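For concreteness, this per-participant LIWC representation can be sketched as follows. This is a minimal illustration rather than the authors' code: it assumes a `liwc_lexicon` mapping from category names to word sets has been extracted from the LIWC 2015 dictionary, ignores LIWC's wildcard prefix matching, and uses illustrative function and argument names.

```python
from collections import Counter

def liwc_representation(emails, liwc_lexicon, filter_words=frozenset()):
    """Mean usage of each LIWC category per email for one participant.

    emails       : list of tokenised emails (lists of lower-cased tokens)
    liwc_lexicon : dict mapping a LIWC category name to a set of words
    filter_words : manually curated ambiguous IETF terms to ignore
    """
    category_counts = Counter()
    for tokens in emails:
        for token in tokens:
            if token in filter_words:
                continue
            for category, words in liwc_lexicon.items():
                if token in words:
                    category_counts[category] += 1
    n_emails = max(len(emails), 1)
    # Normalise by number of emails rather than number of words (see text).
    return {cat: category_counts[cat] / n_emails for cat in liwc_lexicon}
```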
BERT Representation The LIWC representation ignores context. To allow comparison to more advanced methods, we use the context-dependent representations from BERT (Devlin et al., 2019) via the open-source HuggingFace library (Wolf et al.,
2019). The participant-specific BERT representation is calculated by averaging the text representations (last layer CLS vectors) over all their emails.
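A sketch of this aggregation with the HuggingFace library is given below; the specific checkpoint (`bert-base-uncased`) and the 512-token truncation are assumptions consistent with the Limitations section rather than details stated here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def participant_bert_representation(emails):
    """Average the last-layer [CLS] vector over all of a participant's emails."""
    cls_vectors = []
    with torch.no_grad():
        for text in emails:
            enc = tokenizer(text, truncation=True, max_length=512,
                            return_tensors="pt")
            out = model(**enc)
            cls_vectors.append(out.last_hidden_state[:, 0, :].squeeze(0))
    return torch.stack(cls_vectors).mean(dim=0)
```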
## 3 Experimental Set-Up
Dataset The IETF is organised in Working Groups (WGs). Each WG has a technical focus (e.g., HTTP WG for the HTTP protocol) and one or more WG chairs. We use data from two public sources: the IETF mail archives3and the Datatracker4. The mail archives cover WG activities, meetings, and administration. We gathered 2,106,804 emails from 56,733 email addresses spanning 2000-2019.
To determine *mail-based* influence, we use a social graph based on mailing list interactions (messages from one person to another) as built by Khare et al. (2022). We rank participants by their eigenvector centrality, a measure of a node's influence in a graph, and transform rank to a percentile. To determine *role-based* influence, we used Datatracker for information about WG chairs and their tenure.
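The mail-based influence score can be illustrated as below. The actual social graph construction follows Khare et al. (2022); the simple weighted undirected graph and the percentile conversion here are a simplified sketch with illustrative names.

```python
import networkx as nx
from scipy.stats import rankdata

def mail_based_influence(interactions):
    """Eigenvector centrality on the email interaction graph, as a percentile.

    interactions : iterable of (sender, receiver, n_messages) triples
    """
    graph = nx.Graph()
    for sender, receiver, weight in interactions:
        graph.add_edge(sender, receiver, weight=weight)
    centrality = nx.eigenvector_centrality(graph, weight="weight", max_iter=1000)
    nodes = list(centrality)
    scores = [centrality[n] for n in nodes]
    # Rank -> percentile in [0, 100]; higher means more influential.
    percentiles = rankdata(scores) / len(scores) * 100.0
    return dict(zip(nodes, percentiles))
```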
RQ1 (mail-based influence) We used a 5-year subset of the data for RQ1 due to the computation cost, still giving a reasonable period to observe the participation consistency in the IETF community
(McQuistin et al., 2021; Khare et al., 2022). We took data from 2015-2019 with 300,806 emails from 5,363 unique participants. This subset has 212,253 unique tokens, as opposed to 735,605 unique tokens in the whole dataset, and the median length of emails is 504. We calculate the *mail-based* influence score and LIWC representation5 for each participant as described. We fit a linear regression model using LIWC representations to predict influence percentile and observe the magnitude and directions of significant coefficients.
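The coefficient analysis for RQ1 can be sketched with an ordinary least squares fit; using statsmodels here (rather than scikit-learn) is an assumption made purely to expose p-values for the significance filter, and the function name is illustrative.

```python
import numpy as np
import statsmodels.api as sm

def significant_liwc_coefficients(liwc_matrix, influence_percentiles,
                                  category_names, alpha=0.05):
    """Fit influence percentile ~ LIWC features and keep significant terms."""
    X = sm.add_constant(np.asarray(liwc_matrix, dtype=float))
    y = np.asarray(influence_percentiles, dtype=float)
    result = sm.OLS(y, X).fit()
    significant = []
    # Index 0 is the intercept; LIWC categories start at index 1.
    for i, name in enumerate(category_names, start=1):
        if result.pvalues[i] < alpha:
            significant.append((name, result.params[i]))
    return sorted(significant, key=lambda item: item[1], reverse=True)
```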
RQ2 (role-based influence) While *mail-based* influence was crucial to consider the activities of the participants based on the email network, *role-based* influence is equally crucial as they are involved in organisational decision making.6 We use the same time period as in RQ1, but here we predict organisational *role-based* influence. We split the data into two categories: (a) WG chairs and (b) participants who have never been WG chair. We calculate the LIWC representations for each person, train a logistic regression model to predict category, and observe the LIWC category coefficients.

3https://mailarchive.ietf.org/

4https://datatracker.ietf.org/ - the administrative database of the IETF, containing metadata about participants and their roles, working groups, document status, etc.

5We filter out 104 ambiguous words that are present in LIWC but have technology, security, and network context meaning in IETF, using manually curated lists, for e.g., attack, argument, secure etc. We do this across all RQs.

6In the top 10% *mail-based* influential participants, less than 30% are WG chairs with significant *role-based* influence.
RQ3 (changes in influence) We look at participants who went from low to high influence over time: individuals who had a *mail-based* influence below the 50th percentile when they joined the IETF, and reached the top 10th percentile at some point. For each participant, we generate two different representations based on two periods - the year of joining and year of reaching the top 10th percentile for the first time - and assign these to two different classes. As in RQ2, we then train a logistic regression model to predict these classes, and examine the coefficients of the LIWC categories.
BERT-based variants Our primary purpose is not to assess the predictive power of LIWC representations, but to use them as a tool to characterise linguistic variations in a meaningful way. However, in order to understand their predictive potential, given their relatively simple nature, we compare them to BERT. For these comparisons, we use the BERT representations described in Section 2.
For each RQ we use the same experimental setup as described above. We split the data 80:20 into train and test set and train a prediction model (regression for RQ1 and classification for RQ2 &
RQ3). To experiment with both linear and nonlinear models, we include linear and logistic regression and multi layer perceptrons, using implementations from scikit-learn (Pedregosa et al., 2011)
with default parameters. As evaluation metrics we used Pearson's ρ and macro-F1 score.
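A sketch of this comparison pipeline (default scikit-learn models, 80:20 split, macro-F1 as stated above) is shown below; the exact feature matrices and label encodings are assumptions.

```python
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor

def evaluate_representation(X, y, task="classification", seed=0):
    """Train/test a linear model and an MLP with default parameters."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    if task == "regression":      # RQ1: predict influence percentile
        models = [LinearRegression(), MLPRegressor()]
        return {type(m).__name__: pearsonr(y_te, m.fit(X_tr, y_tr).predict(X_te))[0]
                for m in models}
    # RQ2 / RQ3: predict influence class
    models = [LogisticRegression(), MLPClassifier()]
    return {type(m).__name__: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te),
                                       average="macro")
            for m in models}
```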
## 4 Results & Discussion
We now explore the results (see Table 1 for all experiments) and answer our research questions.
## 4.1 Answers To Rqs
RQ1 - The following LIWC categories are significantly correlated (p < 0.05) with higher *mail-based* influence: WE, INFORMAL, RISK, ADJECTIVE, ANGER, THEY, and BIO. Categories such as NETSPEAK, SEXUAL, HEALTH, DEATH, BODY are correlated with lower influence. This suggests that influential people tend to indicate a collaborative and community-oriented approach with first-person plural (WE) and third-person plural category
(THEY) usage. This is consistent with Kacewicz et al. (2014) and Guinote (2017), who show that influential people use more first-person plural. They also use more organisational language, which is shown by the negative correlation of informal slang language categories (NETSPEAK, SEXUAL, BODY).
We see some unexpected hidden trends due to word ambiguity (e.g., words like '*trust*' and '*live*'),
which are investigated in Section 4.2.
RQ2 - From Table 1, we see that working group (WG)
chairs are more social and collaborative, as is shown by WE and SOCIAL categories. This is in line with similar findings from RQ1 and also about leadership engagements from previous works
(Strzalkowski et al., 2012; Liu, 2022; Kacewicz et al., 2014; Guinote, 2017; Akstinaite et al., 2020).
Also, WG chairs use tentative statements (TENTAT)
in discussions, primarily focused on technical feedback and revisions, or suggesting alternatives. Examples showcasing the use of words such as *'or'*
and *'seems'*:
- 'seems':*"With the risk of disturbing with statements, but avoiding too many questions:This* seems against the goal of reducing headers."
- 'or': *"Question is do we need to carry around* an outer IP-in-IP header for that *or not?"*
RQ3 - From Table 1, we observe that when participants become *mail-based* influential they are likely to be more descriptive and engaged in immediate state of issues and situations as seen from the correlation of auxiliary verbs (AUXVERB), adverb, risk, and present focus (FOCUSPRESENT). They are also more involved in cognitive processes (COGPROC)
as compared to their previous self when they were new to IETF and had little influence.
## 4.2 Discussion
To better understand these LIWC categories and what kind of words play a role in the behaviour of individual categories, we calculate the frequency of words in each LIWC category as they appear in the emails. Next, we consider the top 30 most frequent words in each LIWC category and perform regression analysis on *mail-based* influence for participants, but using only these 30 words as features to generate the participant representation. We conducted this experiment separately for each LIWC
category that was significant in the first experiment.
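The per-category word-level analysis can be sketched as follows: for one significant LIWC category, the 30 most frequent member words in the emails become the features of a follow-up regression on influence (names and data structures are illustrative assumptions).

```python
from collections import Counter

def top_word_features(emails_by_participant, category_words, k=30):
    """Per-participant counts of the k most frequent words of one category,
    normalised by the number of emails, for a follow-up regression."""
    corpus_counts = Counter()
    for emails in emails_by_participant.values():
        for tokens in emails:
            corpus_counts.update(t for t in tokens if t in category_words)
    top_words = [w for w, _ in corpus_counts.most_common(k)]

    features = {}
    for participant, emails in emails_by_participant.items():
        counts = Counter(t for tokens in emails for t in tokens)
        n_emails = max(len(emails), 1)
        features[participant] = [counts[w] / n_emails for w in top_words]
    return top_words, features
```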
From the word based analysis we make multiple observations. E.g., words like *'we'* imply a collective approach and is strongly correlated with the higher influence. Similarly, the use of word *'well'*
| RQ | Group | Significant LIWC categories |
|------|-----------------------|-----------------------------------------------------------------------------|
| RQ1 | High influence | BIO, WE, INFORMAL, THEY, NEGEMO, ANGER, RISK, ADJECTIVE |
| RQ1 | Low influence | SEXUAL, DEATH, INGEST, NETSPEAK, HEALTH, FEMALE, BODY, AFFILIATION, CONJ |
| RQ2 | non-WG Chair | COGPROC, RELATIV, AFFILIATION, I, REWARD |
| RQ3 | Top 10 percentile | ADVERB, PREP, ANGER, AUXVERB, MALE, COGPROC, ACHIEV, RISK, FOCUSPRESENT |
| RQ3 | Below 50th percentile | FUNCTION, PPRON, SHEHE, IPRON, NUMBER, CERTAIN, SEXUAL, INFORMAL |

![3_image_0.png](3_image_0.png)
Table 1: LIWC categories where p < 0.05.
is standard, such as politely resuming the conversation (e.g., '*well, I agree*') or providing an approval over something (e.g., '*this works as well*'). These words are well associated with the influential participants. Otherwise, influential participants are generally not observed to be informal and other frequent words (other than *'well'*) within INFORMAL
category do not demonstrate a strong correlation with the growing influence. Also, *'well'* is the most frequent word in the INFORMAL category.
More influential people (both *mail-based* and role-based) are also observed to engage more in IETF communities. The conversations can often reflect situations where, as a part of review and feedback process, more influential people highlight limitations in protocol standards, stress on specifics, and compare with existing protocols or previous versions. Several words across different LIWC categories (RISK, NEGEMO, and ADJ) highlight such behaviour, e.g., 'problems', 'before', *'particular'*,
'specific', 'different', *'most'*, and *'than'*.
However, there are many words with dual sense, like *'trust'* which has a very technology specific usage related to network security instead of conversations involving trust issues between individuals or trust in any given situation. Similarly, the word
'live' is related with an application or network being live, instead of its conventional meaning. We also observed that some of the LIWC categories, such as BIO, did not have specific terms that could clearly establish its significance in favour of influential participants (e.g., word *'problems'* and
'trust' reflecting the significance for the category RISK), instead such categories had several words with quite weak correlation with influential participants. Such words collectively drifted the weight of the category towards influential participants.
## 4.3 Bert-Based Results
We compared the performance of the LIWC- and BERT-based models. Results in Table 2 indicate our LIWC approach is better than an intuitive BERT-based baseline. We hypothesize that the
| | LIWC (LR) | LIWC (MLP) | BERT (LR) | BERT (MLP) |
|-----------------|-----------|------------|-----------|------------|
| RQ1 (Pearson ρ) | 0.850∗ | 0.852∗ | -0.018 | 0.015 |
| RQ2 (Micro F1) | 91.21 | 92.46 | 87.69 | 92.21 |
| RQ3 (Micro F1) | 88.89 | 90.74 | 51.85 | 55.56 |

Table 2: LIWC vs. BERT (∗ p < 0.0001).
reason for this is that LIWC is specialised to detect linguistic markers relevant for this task. Also, to ensure fair comparison, BERT representations were not fine-tuned for the tasks. We believe combining LIWC and BERT might give better representations, especially when dealing with ambiguous words. Curiously, when observing t-SNE (Van der Maaten and Hinton, 2008) projections of participants' BERT representations (Appendix A), we find that low-influence users show a much bigger variation for relevant categories such as WE, NETSPEAK and INFORMAL. We will investigate this in future.
## 5 Conclusions & Future Directions
This paper explores the linguistic patterns of influence in an online collaborative organisation, by analysing the differences between high- and low-influence participants. Using two aspects of influence - *mail-based*, derived from the email network, and organisational *role-based* - we were able to unfold several traits that differentiate influential participants from others. Many of our findings seem corroborated by studies in organisational theory. We observed that influential people exhibit more collaborative and community-oriented traits, and also stronger signs of engagement in discussions. We also observed that as people go on to become influential participants, they evolve in their communication and are seen to be more engaging and descriptive in their linguistic style.
An interesting practical application of our research is identifying and analyzing groups that are dysfunctional in terms of participant roles and their communication patterns (e.g., where the chair is not performing their role). In future work, we will extend the experiments to study these patterns of interaction in more linguistic depth, between more different roles within an organisation (possibly for multiple collaborative organisations). We will attempt to go beyond lexical count and account for word context.
## 6 Limitations
One of the main limitations is that we used the standard LIWC-based analysis approach, which is purely lexical and does not take into account the context in which a word appears. Consequently, many words that have very specific senses in the context of the IETF get miscounted as occurrences of LIWC categories. This could be addressed by a more advanced method of mapping to LIWC
categories that would account for context. Another limitation is that we manually generated a filtering list containing words specific to the IETF.
This list might not be exhaustive enough. Also, we were limited by not conducting an exhaustive hyper-parameter search on our models. We also understand that many emails are longer than 512 tokens (the input limit of the BERT model we used)
and might have not been captured completely by our BERT model. However, most of the emails do fit into this BERT sequence length limit. We did not fine tune BERT on the IETF data; this might have given better performance, although it is not clear if it would have given more insight: our main goal is not performance but analyzing/comparing characteristics of existing models. It is also worth highlighting that the data used in this work is strictly in English, and the psycholinguistic categories in LIWC are also based on English language. Hence, this study may be biased and not fully capture variations in linguistic traits that are culturally agnostic.
Ethical considerations - Participation in the IETF is bound by agreements and policies explicitly stating that mailing list discussions and Datatracker metadata will be made publicly available.7 We use only this publicly available data in our analysis. We have discussed our work with the IETF
leadership to confirm that it fits their acceptable use policies. We have also made provisions to manage the data securely, and retain it only as necessary for our work.
7See both https://www.ietf.org/about/note-well/ and the IETF privacy policy available at https://www.ietf.org/privacy-statement/.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. This work was supported by the UK EPSRC under grants EP/S033564/1 and EP/S036075/1 (Sodestream: Streamlining Social Decision Making for Enhanced Internet Standards).
Purver was also supported by the Slovenian Research Agency via research core funding for the programme Knowledge Technologies (P2-0103).
## References
Vita Akstinaite, Graham Robinson, and Eugene SadlerSmith. 2020. Linguistic markers of ceo hubris. *Journal of Business Ethics*, 167:687–705.
Ravi Bapna and Akhmed Umyarov. 2015. Do your online friends make you pay? a randomized field experiment on peer influence in online social networks.
Management Science, 61(8):1902–1920.
Scott O. Bradner. 1996. The Internet Standards Process
- Revision 3. RFC 2026.
Philip Bramsen, Martha Escobar-Molano, Ami Patel, and Rafael Alonso. 2011. Extracting social power relationships from natural language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 773–782.
Jakob A Buske. 2019. Linguistic accommodation between leaders and followers. B.S. thesis, University of Twente.
Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power:
Language effects and power differences in social interaction. In Proceedings of the 21st international conference on World Wide Web, pages 699–708.
Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013.
No country for old members: User lifecycle and linguistic change in online communities. In *Proceedings of the 22nd international conference on World* Wide Web, pages 307–318.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Eric Gilbert. 2012. Phrases that signal workplace hierarchy. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, pages 1037–1046.
Ana Guinote. 2017. How power affects people: Activating, wanting and goal seeking. *Annual review of* psychology, 68:353–381.
Patrick Healey, Prashant Khare, Ignacio Castro, Gareth Tyson, Mladen Karan, Ravi Shekhar, Stephen McQuistin, Colin Perkins, and Matthew Purver. 2023.
Power and vulnerability: Managing sensitive language in organisational communication (extended abstract). In *ST&D 2023: Annual Meeting of the* Society for Text and Discourse, June 28 - June 30, 2023, Oslo, Norway.
Ewa Kacewicz, James W Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C Graesser. 2014.
Pronoun use reflects standings in social hierarchies. *Journal of Language and Social Psychology*,
33(2):125–143.
Kan Kawabata, Visar Berisha, Anna Scaglione, and Amy LaCross. 2016. A convex model for linguistic influence in group conversations. In *INTERSPEECH*,
pages 1442–1446.
Prashant Khare, Mladen Karan, Stephen McQuistin, Colin Perkins, Gareth Tyson, Matthew Purver, Patrick Healey, and Ignacio Castro. 2022. The web we weave: Untangling the social graph of the IETF.
In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 500–
511.
Bryan Klimt and Yiming Yang. 2004. The enron corpus:
A new dataset for email classification research. In European conference on machine learning, pages 217–226. Springer.
Amy H Liu. 2022. Pronoun usage as a measure of power personalization: A general theory with evidence from the chinese-speaking world. *British Journal of Political Science*, 52(3):1258–1275.
Stephen McQuistin, Mladen Karan, Prashant Khare, Colin Perkins, Gareth Tyson, Matthew Purver, Patrick Healey, Waleed Iqbal, Junaid Qadir, and Ignacio Castro. 2021. Characterising the IETF through the lens of RFC deployment. In *Proceedings of the* 21st ACM Internet Measurement Conference, pages 137–149.
Dong Nguyen, A. Seza Doğruöz, Carolyn P. Rosé, and Franciska De Jong. 2016. Computational sociolinguistics: A survey. *Computational Linguistics*, 42(3):537–593.
Bill Noble and Raquel Fernández. 2015. Centre stage:
How social network position shapes linguistic coordination. In *Proceedings of the 6th workshop on* cognitive modeling and computational linguistics, pages 29–38.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in
Python. *Journal of Machine Learning Research*,
12:2825–2830.
James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report.
Vinodkumar Prabhakaran. 2015. Social power in interactions: Computational analysis and detection of power relations. Ph.D. thesis, Columbia University.
Vinodkumar Prabhakaran, Ashima Arora, and Owen Rambow. 2014. Staying on topic: An indicator of power in political debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1481–1486.
Pete Resnick. 2014. On Consensus and Humming in the IETF. RFC 7282.
Sara Rosenthal. 2014. Detecting influencers in social media discussions. XRDS: Crossroads, The ACM
Magazine for Students, 21(1):40–45.
Tomek Strzalkowski, Samira Shaikh, Ting Liu, George Aaron Broadwell, Jenny Stromer-Galley, Sarah Taylor, Umit Boz, Veena Ravishankar, and Xiaoai Ren. 2012. Modeling leadership and influence in multi-party online discourse. In Proceedings of COLING 2012, pages 2535–2552.
Simo Editha Tchokni, Diarmuid O Séaghdha, and Daniele Quercia. 2014. Emoticons and phrases: Status symbols in social media. In Eighth International AAAI Conference on Weblogs and Social Media.
Raquel Urena, Gang Kou, Yucheng Dong, Francisco Chiclana, and Enrique Herrera-Viedma. 2019. A review on trust propagation and opinion dynamics in social networks and group decision making frameworks. *Information Sciences*, 478:461–475.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine learning research, 9(11).
Lea Vega, Andres Mendez-Vazquez, and Armando López-Cuevas. 2021. Probabilistic reasoning system for social influence analysis in online social networks.
Social Network Analysis and Mining, 11(1):1–20.
Greg Ver Steeg and Aram Galstyan. 2013. Informationtheoretic measures of influence based on content dynamics. In *Proceedings of the sixth ACM international conference on Web search and data mining*,
pages 3–12.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *arXiv preprint* arXiv:1910.03771.
## Appendix A: BERT-Based Results
We investigated how BERT representations vary for participants, as per influence, across different significant LIWC categories. For each participant, we calculated the LIWC category representation by averaging the BERT representation of the words in that LIWC category and then projected using t-SNE. As Figures 1, 2 and 3 show, high-influence participants show less variation in their BERT representations compared to lower-influence participants, for the LIWC categories WE, NETSPEAK and INFORMAL respectively.
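The projection step can be sketched as below; the per-category averaging of word-level BERT vectors and the t-SNE settings are illustrative assumptions rather than reported details.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_category_vectors(category_vectors, influence_labels, seed=0):
    """2-D t-SNE projection of per-participant LIWC-category BERT vectors.

    category_vectors : (n_participants, hidden_size) array; each row is the
                       mean BERT embedding of one participant's words that
                       belong to a given LIWC category (e.g. WE)
    influence_labels : 'high' / 'low' per participant, used for colouring
    """
    coords = TSNE(n_components=2, random_state=seed).fit_transform(
        np.asarray(category_vectors))
    return coords, influence_labels
```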
![6_image_0.png](6_image_0.png) ![6_image_2.png](6_image_2.png)
![6_image_1.png](6_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6 in Limitations section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 2
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 2 - we used artifact(s) as they were intended to be used, without any modifications.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We have used a publicly available dataset as allowed by IETF's privacy statement https://www.ietf.org/privacy-statement/
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2 LIWC Representation
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3
## C ✓ **Did You Run Computational Experiments?** Section 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We used default parameters for experiments without parameter tuning.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 (default parameters)
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2 and Section 3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-metaphor | Metaphor Detection via Explicit Basic Meanings Modelling | https://aclanthology.org/2023.acl-short.9 | One noticeable trend in metaphor detection is the embrace of linguistic theories such as the metaphor identification procedure (MIP) for model architecture design. While MIP clearly defines that the metaphoricity of a lexical unit is determined based on the contrast between its contextual meaning and its basic meaning, existing work does not strictly follow this principle, typically using the aggregated meaning to approximate the basic meaning of target words. In this paper, we propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set, and then compares this with the contextual meaning in a target sentence to identify metaphors. Empirical results show that our method outperforms the state-of-the-art method significantly by 1.0{\%} in F1 score. Moreover, our performance even reaches the theoretical upper bound on the VUA18 benchmark for targets with basic annotations, which demonstrates the importance of modelling basic meanings for metaphor detection. |
# Metaphor Detection Via Explicit Basic Meanings Modelling
Yucheng Li1∗, Shun Wang2∗, Chenghua Lin2†**, Frank Guerin**1 1 Department of Computer Science, University of Surrey, UK
2 Department of Computer Science, University of Sheffield, UK
{yucheng.li, f.guerin}@surrey.ac.uk
{swang209, c.lin}@sheffield.ac.uk
## Abstract
One noticeable trend in metaphor detection is the embrace of linguistic theories such as the metaphor identification procedure (MIP) for model architecture design. While MIP clearly defines that the metaphoricity of a lexical unit is determined based on the contrast between its contextual meaning and its *basic meaning*, existing work does not strictly follow this principle, typically using the *aggregated meaning* to approximate the basic meaning of target words.
In this paper, we propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set, and then compares this with the contextual meaning in a target sentence to identify metaphors. Empirical results show that our method outperforms the state-of-theart method significantly by 1.0% in F1 score.
Moreover, our performance even reaches the theoretical upper bound on the VUA18 benchmark for targets with basic annotations, which demonstrates the importance of modelling basic meanings for metaphor detection.
## 1 Introduction
Metaphors are widely used in daily life for effective communication and vivid description. Due to their unusual and creative usage, further processes are required for machines to understand metaphors, which results in Computational Metaphor Processing (CMP), an active research direction in NLP (Rai and Chakraverty, 2020). Recent studies demonstrate that CMP can benefit a wide range of NLP tasks including creative language generation (Chakrabarty et al., 2020; Li et al., 2022b),
sentiment analysis (Li et al., 2022a), and machine translation (Mao et al., 2018). Metaphor identification, aiming to detect words used metaphorically, is the very first stage in CMP. For example, target words '*attack*' or '*defend*' in the context sentence
"He attacks/defends her point." do not literally involve *physical engagement*, so they are supposed to be identified in metaphor detection for further processing (Steen et al., 2010).

∗ The two authors contributed equally to this work. † Corresponding author
Linguists, philosophers and psychologists propose various ways to define metaphors, including substitution view (Winner, 1997), comparison view
(Gentner, 1983), class inclusion view (Davidson, 1978), and conceptual metaphor theory (Lakoff and Johnson, 2008). In contrast to these theories which are relatively complex in nature, Pragglejaz
(2007) propose a simple and effective linguistic theory called Metaphor Identification Process (MIP)
which can identify metaphors in unrestricted textual corpora. MIP gains increasing popularity as it detects metaphorical terms regardless of specific conceptual mapping or comparison among source and target domain, which makes the identification operational and straightforward.
According to MIP, a word is tagged as a metaphor if its contextual meaning contrast with its
"*more basic meaning*". The basic meaning here is defined as "*more concrete; related to bodily action;*
more precise (as opposed to vague); historically older" guided by dictionaries1. For example, in the sentence "This project is such a *headache!*", the target *headache* here is metaphorical since its contextual meaning is "a thing or person that causes worry or trouble; a problem", which contrasts with the more basic meaning "a continuous pain in the head"2.
Existing deep learning methods for metaphor identification usually depend on MIP in their model design (Mao et al., 2019; Choi et al., 2021; Song et al., 2021; Li et al., 2023; Wang et al., 2023).
However, existing works usually ignore basic meaning modelling and instead use *aggregated meaning* to contrast with contextual meaning in MIP. We 91
![1_image_0.png](1_image_0.png)
call the MIP in these implementations 'Aggregated MIP' (AMIP). For example, Mao et al. (2019) and Li et al. (2023) implement MIP by contrasting contextual meaning representation with GloVe embedding and Decontextualised3 RoBERTa embedding, respectively. However, aggregated meaning representations, such as GloVe and decontextualised embeddings, are not the same as basic meanings in general. They usually represent a frequencybased weighted average of multiple word meanings. In cases where the basic meaning is the most frequent, then the aggregated meaning can be a reasonable approximation to basic meaning. However, it is very common that metaphorical meanings are more frequent so that using aggregated meaning violates the fundamental rule of MIP. For example '*back*' means 'the rear surface of the human body' as basic meaning, but its non-basic senses, e.g. '*going back*', '*back up*', '*back in 1960*', are more frequently used in corpora. This makes the aggregated representation of *back* diverge from its basic sense, so that metaphor cannot be identified via measuring contrast with contextual meaning.
A further pitfall of previous works is that the aggregated representations used are static rather than contextualised. For example, aggregated representation GloVe and Decontextualised RoBERTa embeddings used by Mao et al. (2019) and Li et al.
(2023) are both static embeddings, which are not compatible with the contextual meanings they are compared to and have been shown to have worse representational quality (Bommasani et al., 2020).
In this paper, we propose a novel metaphor identification mechanism, BasicMIP, which implements MIP via direct basic meaning modelling of targets.
BasicMIP explicitly leverages basic annotations from the training set, where the basic meanings of words are labelled as literal according to MIP theory. First, it samples literal instances for each target. Then, the basic meaning representation of the target is obtained by summing up the target embeddings of the sampled literal instances. Finally, the basic representations are contrasted with their contextual meaning representation in target sentences to identify metaphors. We also present our novel metaphor detection model, BasicBERT, which not only uses BasicMIP but also inherits the AMIP module and SPV (Selectional Preference Violation; Wilks, 1975, 1978) theory from prior works.
Extensive experiments conducted on two metaphor benchmarks show that BasicBERT significantly outperforms current SOTAs. In the VUA20 benchmark, our model exceeds MelBERT
by 1% in F1 score. In the VUA18 benchmark, our performance even reaches the theoretical upper bound for the targets with literal annotations in the training set. Our code and data can be found at https://github.com/liyucheng09/BasicBERT.
## 2 Method
The BasicBERT model consists of three main components: BasicMIP, AMIP, and SPV. We include both AMIP and BasicMIP because some words do not have literal annotations in the training set, so AMIP is a useful complementary component for these cases.
## 2.1 Basicmip
BasicMIP, as shown in Figure 1, is based on MIP,
in which a target word's contextualised meaning in the current context is compared with its more basic meaning. **First**, the contextual meaning representation is produced by feeding the current sentence to the RoBERTa network (Liu et al., 2019). Formally, given a sentence S = (w1, ..., wt*, ..., w*n), where wtis the target word, we obtain representations as follows:
$$H=\mathrm{RoBERTa}(\mathrm{emb}_{\mathrm{cls}},...,\mathrm{emb}_{t},...,\mathrm{emb}_{n})\tag{1}$$
Here CLS is a special token indicating the start of an input; embi is the input embedding for word wi; and H = (hcls, ..., ht, ..., hn) represents the output hidden states. We denote the contextual meaning embedding of wt as vS,t = ht.
Second, to contrast the contextual meaning with the basic meaning, our model learns the basic meaning representation of the target from the training annotations. According to MIP (Steen et al., 2010),
we consider targets with a literal label to represent their basic meaning. Therefore, we sample literal examples of the target wt from the training set, denoted as Sb = (..., wt, ...) ∈ U, where U is the training set and Sb stands for a context sentence containing a basic usage of wt. Our model obtains the basic meaning embedding of wt by feeding Sb to a RoBERTa encoder as in Equation 1 and taking the t-th output hidden state ht. The final *contextualised* basic representation of wt is averaged over multiple literal instances and is denoted vB,t, which is intrinsically different from the aggregated representation of the most frequent meaning used in prior works.
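A minimal sketch of how vB,t might be computed is given below. The checkpoint, the mapping from a word index to its first sub-token, and the function names are assumptions for illustration, not the released implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def basic_meaning_embedding(literal_sentences, target_positions):
    """Average the contextual embedding of a target word over its sampled
    literal (basic-sense) sentences from the training set -> v_{B,t}.

    literal_sentences : list of pre-tokenised sentences (lists of words)
    target_positions  : word index of the target in each sentence
    """
    vectors = []
    with torch.no_grad():
        for words, t in zip(literal_sentences, target_positions):
            enc = tokenizer(words, is_split_into_words=True,
                            truncation=True, return_tensors="pt")
            hidden = encoder(**enc).last_hidden_state[0]   # (seq_len, dim)
            sub_idx = enc.word_ids(0).index(t)             # first sub-token of word t
            vectors.append(hidden[sub_idx])
    return torch.stack(vectors).mean(dim=0)
```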
Finally, we compute a hidden vector hBMIP for BasicMIP by concatenating vS,t and vB,t:
$$h_{\mathrm{BMIP}}=f_{0}([v_{S,t},v_{B,t}])\tag{2}$$
where f0(·) denotes a linear layer to learn semantic difference between vS,t and vB,t.
## 2.2 Amip And Spv
The AMIP implementation of MIP theory is inherited by our model, where contextual meaning and aggregated meaning of the target are compared.
Here the contextual target meaning embedding of wt is vS,t, the same as in Equation 2. Then, we feed the single target word wt to the RoBERTa network to derive the decontextualised vector representing the aggregated meanings of wt (Choi et al., 2021): vF,t = RoBERTa(embt).
The SPV theory is also employed which measures the incongruity between the contextual meaning of the target and its context. Similarly, the contextual target meaning embedding is vS,t, and the context sentence meaning is produced by the CLS embedding denoted as vS, where vS = hcls.
Finally, we compute AMIP (hAMIP) from the contextual and aggregated target embedding, and SPV (hSPV) from the contextual target meaning embedding and the sentence embedding.
$$\begin{array}{l c r}{{h_{\mathrm{SPV}}=f_{1}([v_{S},v_{S,t}])}}&{{}}&{{(3)}}\\ {{h_{\mathrm{AMIP}}=f_{2}([v_{S,t},v_{F,t}])}}&{{}}&{{(4)}}\end{array}$$
where f1(·) and f2(·) denote a linear layer to learn the contrast between two features.
## 2.3 Prediction
Finally, we combine three hidden vectors hAMIP,
hSPV and hBMIP to compute a prediction score yˆ,
and use binary cross entropy loss to train the overall framework for metaphor prediction.
$$\begin{array}{r l}{{\hat{y}=\sigma(W^{\top}[h_{\mathrm{BMIP}};h_{\mathrm{AMIP}};h_{\mathrm{SPV}}]+b)}}&{{}}\\ {{\mathcal{L}=-\sum_{i=1}^{N}[y_{i}\log{\hat{y}}_{i}+(1-y_{i})\log(1-{\hat{y}}_{i})]}}&{{}}\end{array}$$
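For illustration, the combination of hBMIP, hAMIP and hSPV and the final sigmoid/BCE objective can be sketched as a small PyTorch module; hidden sizes, dropout and other training details are simplifications, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BasicBERTHead(nn.Module):
    """Combine the BasicMIP, AMIP and SPV hidden vectors for prediction."""

    def __init__(self, dim):
        super().__init__()
        self.f0 = nn.Linear(2 * dim, dim)   # BasicMIP: [v_{S,t}; v_{B,t}]
        self.f1 = nn.Linear(2 * dim, dim)   # SPV:      [v_S;     v_{S,t}]
        self.f2 = nn.Linear(2 * dim, dim)   # AMIP:     [v_{S,t}; v_{F,t}]
        self.out = nn.Linear(3 * dim, 1)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, v_s, v_st, v_bt, v_ft, labels=None):
        h_bmip = self.f0(torch.cat([v_st, v_bt], dim=-1))
        h_spv = self.f1(torch.cat([v_s, v_st], dim=-1))
        h_amip = self.f2(torch.cat([v_st, v_ft], dim=-1))
        logits = self.out(torch.cat([h_bmip, h_amip, h_spv], dim=-1)).squeeze(-1)
        y_hat = torch.sigmoid(logits)
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return y_hat, loss
```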
## 3 Experiments
Dataset. We conduct experiments on two public benchmark datasets: **VUA18** (Leong et al., 2018) and
Baselines. RNN_ELMo (Gao et al., 2018) combined ELMo and BiLSTM as a backbone model.
RNN_MHCA (Mao et al., 2019) introduced MIP
and SPV into RNN_ELMo and capture the contextual feature within window size by multi-head attention. **RoBERTa_SEQ** (Leong et al., 2020)
is a fine-tuned RoBERTa model in the sequence labeling setting for metaphor detection. **MelBERT**
(Choi et al., 2021) realize MIP and SPV theories via a RoBERTa based model. **MrBERT** (Song et al., 2021) is the SOTA on verb metaphor detection based on BERT with verb relation encoded.
FrameBERT (Li et al., 2023) uses frame classes from FrameNet in metaphor detection and achieves SOTA performance on both VUA18 and VUA20.
Implementation details. For target words which have no literal annotations in the training set, we return the decontextualised target representation as the basic meaning vector in the BasicMIP
module. Therefore, the BasicMIP, in this situation, will degenerate to the AMIP implementation.
## 4 Results And Analysis
Overall results. Table 1 shows a comparison of the performance of our model against the baseline
| Models | VUA18 Prec | VUA18 Rec | VUA18 F1 | VUA20 Prec | VUA20 Rec | VUA20 F1 |
|--------------|------------|-----------|-----------|------------|-----------|-----------|
| RNN_ELMo | 71.6 | 73.6 | 72.6 | - | - | - |
| RNN_MHCA | 73.0 | 75.7 | 74.3 | - | - | - |
| RoBERTa_SEQ | 80.1 | 74.4 | 77.1 | 75.1 | 67.1 | 70.9 |
| MrBERT | 82.7 | 72.5 | 77.2 | - | - | - |
| MelBERT | 80.1 | 76.9 | 78.5 | 75.9 | 69.0 | 72.3 |
| FrameBERT | 82.7 | 75.3 | 78.8 | 79.1 | 67.7 | 73.0 |
| BasicBERT | 79.5 | 78.5 | **79.0*** | 73.3 | 73.2 | **73.3*** |
| w/o BasicMIP | 81.7 | 75.1 | 78.3 | 74.8 | 69.8 | 72.2 |

Table 1: Performance comparison on VUA18 and VUA20.
models on VUA18 and VUA20. BasicBERT outperforms all baselines on both VUA18 and VUA20, including the SOTA model MelBERT by 0.5% and 1.0% in F1 score, respectively. A two-tailed t-test was conducted based on 10 paired results (with different random seeds) between BasicBERT and the strongest baseline MelBERT on both VUA18
(p = 0.022) and VUA20 (p = 0.006).
Ablation test. We also perform an ablation experiment to test the benefit of the basic modelling. As shown in Table 1, the performance of BasicBERT
drops substantially when removing basic meaning modelling (w/o BasicMIP) by 0.7% on VUA18 and 1.1% on VUA20, respectively.
Target with and without basic annotation Some target words in the test set might not have literal annotations in the training set. To better understand the mechanism of basic meaning
| Dataset | Model | Annotation | #sample | #target | F1 | Acc |
|---------|----------|-------------|---------|---------|------|------|
| VUA20 | w/ BMIP | has literal | 18060 | 4076 | 74.7 | 91.2 |
| VUA20 | w/ BMIP | no literal | 4136 | 2539 | 68.2 | 86.9 |
| VUA20 | w/o BMIP | has literal | 18060 | 4076 | 73.3 | 91.0 |
| VUA20 | w/o BMIP | no literal | 4136 | 2539 | 68.2 | 87.6 |
| VUA18 | w/ BMIP | has literal | 38825 | 3874 | 81.1 | 94.7 |
| VUA18 | w/ BMIP | no literal | 5122 | 2915 | 67.3 | 87.4 |
| VUA18 | w/o BMIP | has literal | 38825 | 3874 | 80.7 | 94.8 |
| VUA18 | w/o BMIP | no literal | 5122 | 2915 | 66.5 | 88.0 |

Table 2: Performance on targets with and without literal annotations in the training set.
modelling, we test the performance of BasicBERT
on targets that *do* and *do not* have basic meaning annotations in the training data. As shown in Table 2, there are 13% of samples in the VUA18 test set for which we cannot find a corresponding basic meaning annotation in the training set. This number increases to 22% for VUA20. We find BasicBERT gains a significant improvement of 1.4% in F1 score on VUA20 targets with literal annotations via basic meaning modelling. For these targets with literal annotations in the VUA18 benchmark, BasicBERT gives 81.1% in F1 score, which reaches the theoretical upper bound since the inter-annotator agreement (IAA) value of VUA18 is around 0.8 (Leong et al., 2018) (which means further improvement might lead to overfitting).
Contrast measuring. To better compare our BasicMIP with AMIP, we conduct an experiment to directly measure the contrast between features within BasicMIP and AMIP, i.e., the contrast between the contextual and the basic meaning for BasicMIP, and the contrast between the contextual and the most frequent meaning for AMIP. Intuitively, we expect the contrast to be obvious for metaphor cases and to be slight for literal cases.
Cosine distance is used to compute the contrast between two features. The contrast will fall into
(−1, 1), smaller numbers meaning more contrasting, larger numbers meaning less contrasting.
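This measurement can be sketched as below; given the reported (−1, 1) range, we take the contrast to be the cosine similarity between the two feature vectors (variable names are illustrative).

```python
import torch
import torch.nn.functional as F

def contrast(contextual, reference):
    """Cosine similarity between a contextual embedding and a reference
    (basic or most-frequent) embedding; values near 1 mean little contrast,
    values near -1 mean strong contrast."""
    return F.cosine_similarity(contextual, reference, dim=-1)

# Averaging over metaphorical vs. literal test instances would reproduce the
# two rows of Table 3, e.g.:
#   contrast(v_st, v_bt).mean()   # Contextual vs. Basic
#   contrast(v_st, v_ft).mean()   # Contextual vs. Frequent
```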
The results (see Table 3) show that the contrast of BasicMIP features is much more obvious for metaphorical samples, and there is less contrast for literal samples compared with AMIP. Moreover, AMIP only shows a minor gap of 0.13 contrast between metaphor and literal cases. However, a significant gap of 0.89 is captured by BasicMIP
between metaphor and literal cases, which demonstrates that BasicMIP learns the difference between metaphorical and literal expressions well. In summary, the results show the effectiveness of basic meaning modelling in metaphor detection.
Case study. We perform an exploratory analysis on metaphors that our model detects with BasicMIP but misses without it. Prior methods might find very simple targets difficult to classify, such as see, back, hot. This is mainly because their metaphorical meanings are more frequent than their basic meanings, which leads the aggregated representations to be dominated by metaphorical semantics. For example, see basically means *look*. But *I see why you are angry* and *this place has seen the war* are even more
| Modules | Metaphor | Literal |
|-------------------------|----------|---------|
| Contextual vs. Frequent | 0.516 | 0.642 |
| Contextual vs. Basic | -0.082 | 0.809 |

Table 3: Contrast of features within AMIP and BasicMIP. The experiment is conducted on VUA20.
frequent in language corpora. Therefore, the contrast with the contextual meaning tends not to indicate metaphors anymore. On the contrary, basic meaning modelling learns their basic representation by focusing on literal annotations directly, which enables BasicMIP to tackle them with high accuracy (see Appendix A for examples).
## 5 Conclusion
We proposed BasicBERT, a simple but effective approach for metaphor detection. The key feature of our method is the basic meaning modelling for metaphors from training annotations. Extensive experiments show that our model achieves the best results on two benchmarks against SOTA baselines and also reaches the theoretical upper bound for instances with basic annotation. We believe our approach can be extended to other creative language with minor updates. In future, we will try to apply our approach to identify other types of creative language, such as humour and sarcasm.
## 6 Limitations
This paper mainly focuses on modelling basic meaning to identify metaphors, typically learning basic meanings from literal annotations of the VUA
dataset. However, our analysis reveals that the literal annotations of the VUA dataset are incomplete, which means that some words in VUA have no literal instances annotated. Although we propose using contextual word embeddings as a backup in this paper, another promising solution for this issue might be using external resources such as dictionaries. Leveraging dictionaries is commonly used to assist manual metaphor detection, so it could also help our BasicMIP mechanism to generalise. We leave this for future work.
## References
Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020.
Interpreting pretrained contextualized representations via reductions to static embeddings. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758–4781.
Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng. 2020. Generating similes effortlessly like a pro: A style transfer approach for simile generation. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 6455–6469.
Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee, and Jongwuk Lee.
2021. Melbert: Metaphor detection via contextualized late interaction using metaphorical identification theories. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1763–1773.
Donald Davidson. 1978. What metaphors mean. *Critical inquiry*, 5(1):31–47.
Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer.
2018. Neural metaphor detection in context. arXiv preprint arXiv:1808.09653.
Dedre Gentner. 1983. Structure-mapping: A theoretical framework for analogy. *Cognitive science*, 7(2):155–
170.
George Lakoff and Mark Johnson. 2008. *Metaphors we* live by. University of Chicago press.
Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, and Xianyang Chen. 2020. A report on the 2020 vua and toefl metaphor detection shared task. In Proceedings of the Second Workshop on Figurative Language Processing, pages 18–29.
Chee Wee Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 vua metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56–66.
Yucheng Li, Frank Guerin, and Chenghua Lin. 2022a.
The secret of metaphor on expressing stronger emotion. In *Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)*, pages 39–43, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Yucheng Li, Chenghua Lin, and Frank Guerin. 2022b.
Cm-gen: A neural framework for chinese metaphor generation with explicit context modelling. In *International Conference on Computational Linguistics*.
Yucheng Li, Shunyu Wang, Chenghua Lin, Frank Guerin, and Loïc Barrault. 2023. Framebert: Conceptual metaphor detection with frame embedding learning. In *Conference of the European Chapter of* the Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and wordnet based metaphor identification and interpretation. In *Annual Meeting of the* Association for Computational Linguistics.
Rui Mao, Chenghua Lin, and Frank Guerin. 2019. Endto-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3888–3898.
Group Pragglejaz. 2007. Mip: A method for identifying metaphorically used words in discourse. Metaphor and symbol, 22(1):1–39.
Sunny Rai and Shampa Chakraverty. 2020. A survey on computational metaphor processing. *ACM Computing Surveys (CSUR)*, 53(2):1–37.
Wei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, and Lizhen Liu. 2021. Verb metaphor detection via contextual relation learning. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4240–4251.
Gerard Steen, Lettie Dorst, J. Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU.
Shunyu Wang, Yucheng Li, Chenghua Lin, Loïc Barrault, and Frank Guerin. 2023. Metaphor detection with effective context denoising. In Conference of the European Chapter of the Association for Computational Linguistics.
Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. *Artificial Intelligence*, 6(1):53–74.

Yorick Wilks. 1978. Making preferences more active. *Artificial Intelligence*, 11(3):197–223.

Ellen Winner. 1997. *The point of words: Children's understanding of metaphor and irony*. Harvard University Press.

## A Examples Of Targets Get And **Back**

Table 4 shows cases where previous methods fail but ours succeeds. Corresponding sentences with a basic usage of the target from the training set are also included. We also show word sense illustrations in Figure 2 and Figure 3. The figures are drawn via RoBERTa embeddings and PCA. We can see that the most frequent meanings of back are *'former location'* and *'travel backward'* instead of the basic meaning *'human body'*, and the meanings of get are almost equally frequent.
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
| Target | Cases | Basic Examples |
|--------|-------|----------------|
| get | we will , i 'm just saying we do wan na get into cocktail | where do you get your carrots from ? |
| | they 're watching neighbours come on , get up you lazy bugger ! | and you 'll get a separate room |
| | oh we did n't get much further on there , what we started with this morning . | i 'm gon na get some cleaning , i 'll get some cleaning fluid this week . |
| back | why ca n't they take it through the back door and up the stair ? | within 10 minutes i had turned my back on the corduroy battalions of trees and was striding under a still . |
| | they are unlikely to find a place to do so which is not in somebody 's back yard . | on the edge of the lawn with his back to the cedar tree . |
Table 4: Case study of targets *get* and *back*
| Hardware | TITAN RTX |
|---------------|-------------|
| Runtime/epoch | 50 min |
| Parameters | 252,839,426 |
Table 5: Experiment details
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
chen-etal-2023-xsim | x{SIM}++: An Improved Proxy to Bitext Mining Performance for Low-Resource Languages | https://aclanthology.org/2023.acl-short.10 | We introduce a new proxy score for evaluating bitext mining based on similarity in a multilingual embedding space: xsim++. In comparison to xsim, this improved proxy leverages rule-based approaches to extend English sentences in any evaluation set with synthetic, hard-to-distinguish examples which more closely mirror the scenarios we encounter during large-scale mining. We validate this proxy by running a significant number of bitext mining experiments for a set of low-resource languages, and subsequently train NMT systems on the mined data. In comparison to xsim, we show that xsim++ is better correlated with the downstream BLEU scores of translation systems trained on mined bitexts, providing a reliable proxy of bitext mining performance without needing to run expensive bitext mining pipelines. xsim++ also reports performance for different error types, offering more fine-grained feedbacks for model development. | # Xsim++: An Improved Proxy To Bitext Mining Performance For Low-Resource Languages
Mingda Chen∗, Kevin Heffernan∗**, Onur Çelebi, Alex Mourachko, Holger Schwenk**
{mingdachen,kevinheffernan,celebio,alexmourachko,schwenk}@meta.com Meta AI Research

∗Equal contribution
## Abstract
We introduce a new proxy score for evaluating bitext mining based on similarity in a multilingual embedding space: xsim++. In comparison to xsim, this improved proxy leverages rulebased approaches to extend English sentences in any evaluation set with synthetic, hard-todistinguish examples which more closely mirror the scenarios we encounter during largescale mining. We validate this proxy by running a significant number of bitext mining experiments for a set of low-resource languages, and subsequently train NMT systems on the mined data. In comparison to xsim, we show that xsim++ is better correlated with the downstream BLEU scores of translation systems trained on mined bitexts, providing a reliable proxy of bitext mining performance without needing to run expensive bitext mining pipelines. xsim++ also reports performance for different error types, offering more fine-grained feedback for model development.
## 1 Introduction
When training neural machine translation (NMT)
systems, it has been shown in prior works that generally, the quality of such systems increases with the availability of high-quality training data
(Koehn and Knowles, 2017). However, for many low-resource languages there are few public corpora available, posing many challenges. In order to address this sparsity, one approach is to supplement existing datasets with automatically created parallel corpora, and a technique which has been shown to be successful for such issues is the task of bitext mining (Schwenk et al., 2021b).
In bitext mining, the aim is to find pairs of sentences with the same sentence meaning across collections of monolingual corpora. In this work, we adopt a *global mining* approach (Schwenk et al.,
2021a), which has shown recent success in providing high-quality data for low-resourced languages (NLLB Team et al., 2022).
In order to evaluate any bitext mining method, a natural approach is to train an NMT system on the automatically created alignments. However, this is extremely costly. As an alternative, the BUCC task (Zweigenbaum et al., 2018) offers a method for evaluating bitext mining algorithms by embedding known alignments within monolingual corpora, and then reporting on the number of correctly aligned pairs. However, this task currently only covers 5 high-resourced languages (English, French, Russian, German and Chinese), and so is not applicable to the low-resource domain. In order to address this, another approach to evaluate bitext mining is to align existing multilingual parallel test sets. Two such test sets are Tatoeba and FLORES200. However, as shown by Heffernan et al. (2022), the Tatoeba corpus is not very reliable given that for some language pairs there are only a few hundred sentences. Therefore, we opt to use FLORES200, which is also n-way parallel.
One existing method for evaluating bitext mining on parallel test sets is xsim. This method reports the error rate of misaligned sentences, and follows a margin-based global mining approach (Artetxe and Schwenk, 2019a). However, although using xsim on test sets such as FLORES200 has been shown to be useful as a proxy metric for bitext mining
(NLLB Team et al., 2022), it has the following limitations:
1. Using FLORES200 alone has proven to not be difficult enough as for many language pairs, existing approaches quickly saturate at 0%
error (NLLB Team et al., 2022).
| Transformation Category | Original Sentence | Transformed Sentence |
|-------------------------|-------------------|----------------------|
| Causality Alternation | Apart from the fever and a sore throat, I feel well and in good shape to carry out my work by telecommuting. | Apart from the fever and a sore throat, I feel well and in bad shape to carry out my work by telecommuting. |
| Entity Replacement | Charles was the first member of the British Royal Family to be awarded a degree. | M. Smith was the first member of The University to be awarded a degree. |
| Number Replacement | Nadal bagged 88% net points in the match winning 76 points in the first serve. | Nadal bagged 98% net points in the match winning 71 points in the sixth serve. |
Table 1: Examples of the transformations applied to the English sentences from FLORES200 dev set. The red texts indicate the places of alternations.
2. As the dev and devtest sets are quite small
(997/1012 respectively), this is arguably not a good approximation for performance when mining against billions of possible candidate sentences.
3. We have observed that there is not a significant overlap in the semantics between candidate sentences, meaning that it is not possible to test difficult scenarios that arise in bitext mining when choosing between multiple (similar)
candidate pairs.
In order to address these limitations, in this work we introduce xsim++. This is an improved proxy for bitext mining performance which expands the dev and devtest sets of FLORES200 to include both more data points, and also difficult to distinguish cases which provide far greater challenges to the models. Our contributions can be summarised as follows:
1. We create a more semantically challenging and expanded English test set for FLORES200.
2. We validate this new test set by independently performing 110 bitext mining runs, training 110 NMT systems on the output mined bitexts, and then determining both the correlation and statistical significance between xsim++ and the resulting BLEU scores.
3. We open-source the expanded FLORES200 dev and devtest sets, and also the xsim++ code to evaluate them (https://github.com/facebookresearch/LASER).
## 2 Methodology

## 2.1 Background: Xsim

Given two lists of sentences in different languages, xsim seeks to align each sentence in the source language to a corresponding sentence in the target language based on a margin-based similarity (Artetxe and Schwenk, 2019a). In doing so, xsim leverages the mining approach described in Artetxe and Schwenk (2019b) to first encode sentences into embedding vectors, assign pairwise scores between sentences in the lists, and then take the sentence in the target language that achieves the maximum score as the final prediction. xsim relies on human-annotated parallel corpora and measures the performance of bitext mining using the fraction of misaligned source sentences, i.e., error rates.
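As a rough illustration, a minimal sketch of this procedure (ours, not the authors' released implementation) is shown below: both sides of an aligned test set are encoded, all pairs are scored with the ratio margin of Artetxe and Schwenk (2019a), and the error rate counts source sentences whose highest-scoring target is not the gold one. The encoder, the value of k, and the use of in-set neighbourhoods are simplifying assumptions.

```python
import numpy as np

def ratio_margin_scores(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 4) -> np.ndarray:
    """Pairwise ratio-margin scores over L2-normalised sentence embeddings."""
    cos = src_emb @ tgt_emb.T                            # (n_src, n_tgt) cosine similarities
    knn_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)  # mean similarity of each source to its k NNs
    knn_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)  # mean similarity of each target to its k NNs
    return cos / (knn_src[:, None] / 2 + knn_tgt[None, :] / 2)

def xsim_error_rate(src_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """Fraction of source sentences not aligned to their gold (same-index) target."""
    pred = ratio_margin_scores(src_emb, tgt_emb).argmax(axis=1)
    gold = np.arange(len(src_emb))
    return float((pred != gold).mean())
```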
## 2.2 Xsim++
As the effectiveness of xsim is limited by the availability of parallel corpora, we choose to create xsim++ by automatically expanding the English sentences, and evaluate the sentence encoders on into-English language directions, following prior work on low-resource bitext mining (Heffernan et al., 2022). Aside from the expanded candidate set, xsim++ follows the same procedure as xsim.
xsim++ seeks to capture more subtle improvements in bitext mining by adding challenging negative examples. The examples are human-written sentences transformed by various operations. These operations intend to perturb semantics through minimal alterations of the surface text. In particular, we use the following categories of transformations: causality alternation, entity replacement, and number replacement. We focus on these three transformation types only as they easily allow us to create negative examples. Examples of the transformed sentences are shown in Table 1. For these transformations, we adapt the implementation in Dhole et al. (2021) and describe the details of these transformations below.

| | Total # | # per orig. |
|-----------|---------|-------------|
| Original | 997 | - |
| Causality | 1868 | 1.87 |
| Entity | 37745 | 37.86 |
| Number | 3476 | 3.49 |

Table 2: Data statistics for xsim++ on the FLORES200 dev set: total numbers of original and transformed sentences per transformation category, and the average number of transformations per original sentence.
Causality Alternation. To alter causality in a sentence, we (1) replace adjectives with their antonyms; (2) negate the meaning of sentences by adding or removing negation function words
(e.g. "did not" and "was not") to the sentences; or
(3) leverage the negation strengthening approach
(Tan et al., 2021), which changes the causal relationships through more assertive function words
(e.g. replacing "may" with "will"). For example, as shown in Table 1 we replace "good" with the antonym "bad".
Entity Replacement. We collect candidate entities from large amounts of monolingual data. Then we replace entities in sentences with the ones randomly sampled from the candidate set. For both stages, we use the named entity recognizer from NLTK (Bird et al., 2009).
Number Replacement. We use spaCy (Honnibal and Montani, 2017) to detect dates, ordinals, cardinals, times, numbers, and percentages and then randomly replace their values.
Given the strategies above, for each sentence we create multiple transformations (i.e. negative examples) of that source sentence. For example, consider Table 1. In the "Entity Replacement" example we create a transformation by replacing two named entities. We can then continue this process by replacing these with other named entities until we have reached the desired number of total transformations (we set a maximum threshold of 100 transformations per category per sentence). Note that since the opportunity to change each category is dependent on the frequency of that category in the evaluation sets, some transformations occurred more than others (e.g. entities were more frequent than numbers). We summarize the data statistics for xsim++ on the FLORES200 dev set in Table 2. Results for the devtest set are in Appendix A.
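To make the transformations concrete, here is a minimal sketch (ours, not the NL-Augmenter code we actually adapt) of the number-replacement category, assuming spaCy's `en_core_web_sm` model is installed; for brevity it only perturbs purely numeric spans, whereas the full implementation also handles dates, times, and percentages.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")
NUMERIC_LABELS = {"DATE", "ORDINAL", "CARDINAL", "TIME", "PERCENT", "QUANTITY", "MONEY"}

def replace_numbers(sentence: str) -> str:
    """Create a hard negative by shifting the value of numeric spans."""
    doc = nlp(sentence)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in NUMERIC_LABELS and ent.text.strip().isdigit():
            out.append(sentence[last:ent.start_char])
            out.append(str(int(ent.text) + random.randint(1, 9)))  # keep the surface form plausible
            last = ent.end_char
    out.append(sentence[last:])
    return "".join(out)

print(replace_numbers("Nadal bagged 88 net points in the match winning 76 points in the first serve."))
```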
## 3 Experiment
In order to establish xsim++ as a proxy for bitext mining performance, we measure the correlation between both xsim and xsim++ error rates, and the BLEU scores resulting from NMT systems trained on mined bitexts. More specifically, for each language we choose a sentence encoder model, followed by bitext mining using each respective encoder, and then train and evaluate bilingual NMT
systems on the resulting mined bitexts. We use the FLORES200 development sets when computing the BLEU scores.
In order to validate xsim++ against varied embedding spaces, we encode (and mine) using two different multilingual encoder methods: LASER
(Artetxe and Schwenk, 2019b) and LaBSE (Feng et al., 2022). For LASER, we trained our own custom encoders (details below). For LaBSE, we used a publicly available model, as the code and data for training LaBSE are not publicly available.
We randomly choose 10 low-resource languages to perform both encoder training (if applicable) and bitext mining. The languages are: Faroese
(fao), Kabuverdianu (kea), Tok Pisin (tpi), Kikuyu
(kik), Friulian (fur), Igbo (ibo), Luxembourgish
(ltz), Swahili (swh), Zulu (zul), Bemba (bem).
Encoder Training. We trained LASER encoders using the teacher-student approach described in Heffernan et al. (2022). We choose a LASER
model (Artetxe and Schwenk, 2019b) as our teacher, and then trained specialised students for each language. In order to train each student, we used both publicly available code and bitexts (e.g. OPUS).
Bitext Mining. For each chosen encoder model, we perform bitext mining against approximately 3.7 billion sentences of English. For low-resource languages, the sizes of monolingual data range from 140k to 124 million. Details are in the appendix.
We make use of monolingual data available from both CommonCrawl and ParaCrawl, and operationalize the mining using the stopes library (Andrews et al., 2022). For LASER, we use 1.06 as the margin threshold following Heffernan et al. (2022), and for LaBSE, we use 1.16. Following mining, for each language we concatenate publicly available bitexts and the mined bitext as training data for bilingual NMT models using fairseq, translating from each foreign text into English. For all NMT systems, we keep the hyperparameters fixed (details in the Appendix).
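Concretely, one direction of the mining step can be sketched as below (ours; a simplified, single-direction variant that assumes FAISS as the nearest-neighbour backend and approximates the ratio-margin denominator with the source-side neighbourhood only). Inputs are L2-normalised sentence embeddings of the two monolingual corpora.

```python
import faiss
import numpy as np

def mine_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 16, threshold: float = 1.06):
    """Return (src_id, tgt_id, margin) triples whose margin exceeds the threshold."""
    index = faiss.IndexFlatIP(tgt_emb.shape[1])          # inner product == cosine for normalised vectors
    index.add(tgt_emb.astype(np.float32))
    sims, ids = index.search(src_emb.astype(np.float32), k)
    margins = sims[:, 0] / sims.mean(axis=1)             # best match vs. average of its k-NN neighbourhood
    return [(i, int(ids[i, 0]), float(margins[i]))
            for i in range(len(src_emb)) if margins[i] >= threshold]
```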
Evaluation. Model selection involves two use cases: comparisons within a model and across different models. For the former comparison, given our custom encoders, we choose to compare 10 checkpoints from each model. For cross-model comparisons, we compare each chosen encoder checkpoint against another existing system, in this case the LaBSE encoder. To quantitatively measure these two cases, we report pairwise ranking accuracy (Kocmi et al., 2021) for xsim and xsim++.
Formally, the accuracy is computed as follows:

$$\frac{\left|\,s(\mathrm{proxy}_{\Delta})=s(\mathrm{mining}_{\Delta})\ \text{for all system pairs}\,\right|}{\left|\,\text{all system pairs}\,\right|}$$
where proxy$_{\Delta}$ is the difference of the xsim or xsim++ scores, mining$_{\Delta}$ is the difference of the BLEU scores, $s(\cdot)$ is the sign function, and $|\cdot|$ returns the cardinality of the input.
In this work, we have 550 system pairs, with 55 pairs per language direction (i.e., $\binom{11}{2}$ pairs given 10 custom LASER encoder checkpoints plus LaBSE). We always compare systems within a language direction, as the scores for system pairs across different directions are not comparable.
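A direct instantiation of this accuracy (ours; variable names are illustrative) is given below. Since xsim and xsim++ are error rates, we negate them before comparison so that a sign agreement means "lower error goes with higher BLEU".

```python
from itertools import combinations

def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def pairwise_ranking_accuracy(proxy: list, bleu: list) -> float:
    """proxy and bleu hold one score per system within a single language direction."""
    pairs = list(combinations(range(len(proxy)), 2))
    agree = sum(sign(proxy[i] - proxy[j]) == sign(bleu[i] - bleu[j]) for i, j in pairs)
    return agree / len(pairs)

# Toy example with 3 systems (in our setup each direction has 11 systems, i.e. 55 pairs).
xsimpp_error = [12.0, 8.5, 3.1]
bleu = [21.4, 23.0, 25.2]
print(pairwise_ranking_accuracy([-e for e in xsimpp_error], bleu))  # -> 1.0
```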
## 3.1 Results
As shown in Table 3, xsim++ significantly outperforms xsim on the pairwise ranking accuracy.
| Metric | Accuracy | GPU hours |
|----------------------|------------|-------------|
| xsim | 35.48 | 0.43 |
| xsim++ | 72.00∗ | 0.52 |
| Mining BLEU (Oracle) | 100 | 19569 |
Table 3: Pairwise ranking accuracy along with the total number of GPU hours. For all experiments, we used NVIDIA A100 GPUs. An ∗ indicates that the result passes the significance test proposed by Koehn (2004)
with p-value < 0.05 when compared to xsim.
| | Accuracy |
|---------------------------------|-------|
| xsim++ | 72.00 |
| Causality | 63.09 |
| Entity | 65.45 |
| Number | 60.73 |
| Misaligned | 40.73 |
| Causality + Entity | 68.55 |
| Causality + Entity + Misaligned | 70.55 |
| Causality + Misaligned | 68.00 |
| Causality + Number | 66.73 |
| Causality + Number + Misaligned | 71.45 |
| Entity + Misaligned | 70.55 |
| Number + Entity | 67.45 |
| Number + Entity + Misaligned | 71.09 |
| Number + Misaligned | 64.36 |
Table 4: Pairwise ranking accuracy when using combinations of error categories. Causality=Causality Alternation, Entity=Entity Replacement, Number=Number Replacement.
Additionally, when comparing the computational cost to mining, xsim++ costs over 99.9% fewer GPU hours and saves approximately 3 metric tons of carbon emissions, while still managing to achieve a competitive accuracy. We observe similar trends for the within-model and across-model use cases and report their separate accuracies in the appendix.
To better understand the contributions of each transformation category (cf. subsection 2.1) in measuring the final mining performance, we report accuracies for different combinations of categories in Table 4. In cases where an incorrect bitext alignment does not map to any of the augmented sentences of the true alignment, we denote these as "misaligned". We find that entity replacement helps most in improving the accuracy and combining all the transformations gives the best performance.
## 4 Related Work
As xsim++ uses rule-based data augmentation, it is related to work in other areas that also employ similar data augmentation methods, such as part-of-speech tagging (Şahin and Steedman, 2018), contrastive learning (Tang et al., 2022), text classification (Kobayashi, 2018; Wei and Zou, 2019),
dialogue generation (Niu and Bansal, 2018) and summarization (Chen and Yang, 2021).
## 5 Conclusion And Future Work
We proposed a proxy score xsim++ for bitext mining performance using three kinds of data augmentation techniques: causality alternation, entity replacement, and number replacement. To validate its effectiveness, we conducted large-scale bitext mining experiments for 10 low-resource languages, and reported pairwise ranking accuracies. We found that xsim++ significantly improves over xsim, doubling the accuracies. Analysis reveals that entity replacement helps most in the improvement. In future work, we plan to extend xsim++ to non-English language pairs.
## 6 Limitations
We highlight three limitations of our work. The first is that xsim++ is automatically constructed. There could be noisy sentences leading to errors that are irrelevant to the quality of encoders. The second is that xsim++ applies transformations solely to English sentences. Generalizing it to non-English language pairs requires additional research. Finally, we have experimented with the two most popular multilingual encoders: LASER and LaBSE. There are other available approaches which would be interesting to also validate xsim++ against.
## References
Pierre Andrews, Guillaume Wenzek, Kevin Heffernan, Onur Çelebi, Anna Sun, Ammar Kamran, Yingzhe Guo, Alexandre Mourachko, Holger Schwenk, and Angela Fan. 2022. stopes - modular machine translation pipelines. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations. Association for Computational Linguistics.
Mikel Artetxe and Holger Schwenk. 2019a. Marginbased parallel corpus mining with multilingual sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197–3203, Florence, Italy. Association for Computational Linguistics.
Mikel Artetxe and Holger Schwenk. 2019b. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610.
Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural language processing with Python: analyzing text* with the natural language toolkit. " O'Reilly Media, Inc.".
Jiaao Chen and Diyi Yang. 2021. Simple conversational data augmentation for semi-supervised abstractive dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6605–6616, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Rishabh Gupta, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S.,
Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J.
Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama
Yaseen, M. Yee, Jing Zhang, and Yue Zhang. 2021.
Nl-augmenter: A framework for task-sensitive natural language augmentation.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics.
Kevin Heffernan, Onur Çelebi, and Holger Schwenk.
2022. Bitext mining using distilled sentence representations for low-resource languages. Findings of EMNLP.
Matthew Honnibal and Ines Montani. 2017. spaCy 2:
Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457, New Orleans, Louisiana. Association for Computational Linguistics.
Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In *Proceedings of the Sixth* Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In *Proceedings* of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics.
Tong Niu and Mohit Bansal. 2018. Adversarial oversensitivity and over-stability strategies for dialogue models. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 486–496, Brussels, Belgium. Association for Computational Linguistics.
NLLB Team, Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind:
Scaling human-centered machine translation. *arXiv* preprint arXiv:2207.04672.
Gözde Gül Şahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for low-resource languages. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5004–5009, Brussels, Belgium.
Association for Computational Linguistics.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021a. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351–1361, Online. Association for Computational Linguistics.
Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan.
2021b. CCMatrix: Mining billions of high-quality parallel sentences on the web. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6490–6500, Online. Association for Computational Linguistics.
Fiona Anting Tan, Devamanyu Hazarika, See-Kiong Ng, Soujanya Poria, and Roger Zimmermann. 2021.
Causal augmentation for causal sentence classification. In *Proceedings of the First Workshop on Causal* Inference and NLP, pages 1–20, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zilu Tang, Muhammed Yusuf Kocyigit, and Derry Tanti Wijaya. 2022. AugCSE: Contrastive sentence embedding with diverse augmentations. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 375–398, Online only. Association for Computational Linguistics.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics.
Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp.
2018. Overview of the third bucc shared task: Spotting parallel sentences in comparable corpora. In Proceedings of 11th workshop on building and using comparable corpora, pages 39–42.
## A Data Statistics For Xsim++ With FLORES200 Devtest Set
| Total # | # per orig. | |
|-----------|---------------|-------|
| Original | 1012 | - |
| Causality | 1916 | 1.89 |
| Entity | 38855 | 38.39 |
| Number | 3262 | 3.22 |
Table 5: Total numbers of original sentences and transformed sentences in different transformation categories.
We also report the averaged numbers of transformations per original sentence for each category.
We report the data statistics for xsim++ with FLORES200 devtest set in Table 5.
## B Sizes Of Monolingual Data For Low-Resource Languages
| Language | Size |
|------------|-------------|
| kik | 147,902 |
| kea | 226,507 |
| fur | 737,178 |
| fao | 1,179,475 |
| tpi | 1,661,743 |
| bem | 2,302,805 |
| ibo | 8,124,418 |
| zul | 20,477,331 |
| swh | 55,399,821 |
| ltz | 123,944,670 |
Table 6: Number of monolingual sentences for each language.
We report the sizes of monolingual data for each language in Table 6.
## C Hyperparameters For Nmt Systems
| encoder layers | 6 |
|-------------------------|-------------|
| encoder attention heads | 8 |
| encoder embed dim | 512 |
| encoder FFNN embed dim | 4096 |
| decoder layers | 6 |
| decoder attention heads | 8 |
| decoder embed dim | 512 |
| decoder FFNN embed dim | 4096 |
| optimiser | Adam |
| adam betas | (0.9, 0.98) |
| learning rate | 0.001 |
| dropout | 0.3 |
| spm vocab size | 7000 |
Table 7: Hyperparameters for NMT systems.
We report hyperparameters for NMT evaluations in Table 7.
## D Within And Across Model Accuracies
We report accuracies for within a model (i.e., LASER) and across different models (i.e., the 10 LASER checkpoints vs LaBSE) in Table 8.

| Metric | Within | Across |
|----------|----------|----------|
| xsim | 31.33 | 54.04 |
| xsim++ | 69.77∗ | 82.00∗ |

Table 8: Pairwise ranking accuracy for comparisons within a model and across different models. An ∗ indicates that the result passes the significance test proposed by Koehn (2004) with p-value < 0.05 when compared to xsim.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 2
✓ B1. Did you cite the creators of artifacts you used?
Section 2
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2
## C ✓ **Did You Run Computational Experiments?**
Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 and Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3 and Appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
cai-etal-2023-graph | Graph Propagation based Data Augmentation for Named Entity Recognition | https://aclanthology.org/2023.acl-short.11 | Data augmentation is an effective solution to improve model performance and robustness for low-resource named entity recognition (NER). However, synthetic data often suffer from poor diversity, which leads to performance limitations. In this paper, we propose a novel Graph Propagated Data Augmentation (GPDA) framework for Named Entity Recognition (NER), leveraging graph propagation to build relationships between labeled data and unlabeled natural texts. By projecting the annotations from the labeled text to the unlabeled text, the unlabeled texts are partially labeled, which has more diversity rather than synthetic annotated data. To strengthen the propagation precision, a simple search engine built on Wikipedia is utilized to fetch related texts of labeled data and to propagate the entity labels to them in the light of the anchor links. Besides, we construct and perform experiments on a real-world low-resource dataset of the E-commerce domain, which will be publicly available to facilitate the low-resource NER research. Experimental results show that GPDA presents substantial improvements over previous data augmentation methods on multiple low-resource NER datasets. | # Improving Low-Resource Named Entity Recognition With Graph Propagated Data Augmentation
Jiong Cai⋄, Shen Huang†, Yong Jiang†∗, Zeqi Tan♠, Pengjun Xie†**, Kewei Tu**⋄ ∗
⋄School of Information Science and Technology, ShanghaiTech University Shanghai Engineering Research Center of Intelligent Vision and Imaging Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences University of Chinese Academy of Sciences
♠College of Computer Science and Technology, Zhejiang University
†DAMO Academy, Alibaba Group
## Abstract
Data augmentation is an effective solution to improve model performance and robustness for low-resource named entity recognition (NER). However, synthetic data often suffer from poor diversity, which leads to performance limitations. In this paper, we propose a novel Graph Propagated Data Augmentation (GPDA) framework for Named Entity Recognition (NER),
leveraging graph propagation to build relationships between labeled data and unlabeled natural texts. By projecting the annotations from the labeled text to the unlabeled text, the unlabeled texts are partially labeled, which has more diversity rather than synthetic annotated data. To strengthen the propagation precision, a simple search engine built on Wikipedia is utilized to fetch related texts of labeled data and to propagate the entity labels to them in the light of the anchor links. Besides, we construct and perform experiments on a real-world lowresource dataset of the E-commerce domain, which will be publicly available to facilitate the low-resource NER research. Experimental results show that GPDA presents substantial improvements over previous data augmentation methods on multiple low-resource NER
datasets. 1
## 1 Introduction
Data augmentation is an effective solution to improve model performance and robustness, and is especially useful when the labeled data is scarce. In computer vision and speech, simple hand-crafted manipulations (Zhong et al., 2020; Zhang et al.,
2018) are widely used to generate synthetic data that preserve the original information. However,
∗ The email addresses of the authors are: Jiong Cai ([email protected]), Shen Huang ([email protected]), Yong Jiang ([email protected]), Zeqi Tan ([email protected]), Pengjun Xie ([email protected]) and Kewei Tu ([email protected]). Yong Jiang and Kewei Tu are the corresponding authors.
1Our code is publicly available at https://github.com/
modelscope/AdaSeq/tree/master/examples/GPDA.
when applied to natural language processing (NLP),
it is challenging to edit a sentence without changing its syntax or semantics.
There are two successful attempts of applying data augmentation on sentence-level NLP tasks.
One is manipulating a few words in the original sentence, which can be based on synonym replacement (Zhang et al., 2015; Kobayashi, 2018; Wu et al., 2019; Wei and Zou, 2019), random insertion or deletion (Wei and Zou, 2019), random swap (Şahin and Steedman, 2018; Wei and Zou, 2019; Min et al., 2020).
these methods suffer heavily from token-label misalignment or erroneous label propagation.
To overcome the issue of token-label misalignment, Dai and Adel (2020) extend the replacement from token-level to entity-level with entities of the same class, which proves to be a simple but strong augmentation method for NER. Li et al.
(2020) adopt a seq2seq model to conditionally generate contexts while leaving entities / aspect terms unchanged. Ding et al. (2020) exploit an auto-regressive language model to annotate entities while treating NER as a text tagging task. Zhou et al. (2022) utilize labeled sequence linearization to enable a masked entity language model to explicitly condition on label information when predicting masked entity tokens. Still, these methods generate synthetic data, which inevitably introduces incoherence, semantic errors and a lack of diversity.
In this work, we investigate data augmentation with natural texts instead of synthetic ones. We are inspired by the fact that professional annotators usually understand the semantics of an entity through its rich context. However, in low-resource NER, the semantic information of a specific entity is relatively limited due to fewer annotations. To this end, we propose to improve the NER models by mining richer contexts for the existing labeled entities. More particularly, we propose a Graph Propagation based Data Augmentation (GPDA)
framework for NER, leveraging graph propagation to build relationships between labeled data and unlabeled natural texts. The unlabeled texts are accurately and partially labeled according to their connected labeled data, which offers more diversity than synthetic hand-crafted annotations.
Furthermore, not restricted to the existing annotated entities in the training data, we explore external entities from the unlabeled text by leveraging consistency-restricted self-training.
The contributions of GPDA can be concluded:
- We propose a data augmentation framework that utilizes graph propagation with natural texts for augmentation, which is rarely investigated in previous work (Section 2);
- We utilize a simple Wikipedia-based search engine to build the graph with two retrieval methods (Section 2.2);
- With consistency-restricted self-training, we further make the most efficient utilization of externally explored unlabeled text (Section 2.3);
- By conducting experiments on both public datasets and a real-world multilingual lowresource dataset, GPDA achieves substantial improvements over previous data augmentation methods (Section 3).
## 2 Method
Fig. 1 presents the workflow of our proposed data augmentation framework. First, we build a graph between labeled data nodes and unlabeled text nodes according to their textual similarity. Then, the entity annotations are propagated to obtain augmented data. Finally, the marginalized likelihood for conditional random field (CRF) (Tsuboi et al.,
2008) is applied during the training phase as the augmented data are partially labeled. Moreover, we adopt the consistency-restricted self-training strategy to further improve the model performance.
## 2.1 Ner With Pure Labeled Data
![1_image_0.png](1_image_0.png)

Figure 1: The workflow of our proposed data augmentation framework.

We take NER as a sequence labeling problem, which predicts a label sequence $\mathbf{y}=\{y_{1},\cdots,y_{n}\mid y_{i}\in\mathcal{Y}\}$ at each position for the input tokens $\mathbf{x}=\{x_{1},\cdots,x_{n}\}$, where $\mathcal{Y}$ denotes the label set. The sequence labeling model feeds the input $\mathbf{x}$ into a transformer-based encoder (such as BERT (Devlin et al., 2019)) which creates a contextualized embedding $r_{i}$ for each token. Then a linear-chain CRF layer that captures dependencies between neighboring labels is applied to predict the probability distribution:
$$\psi(y_{i-1},y_{i},r_{i})=\exp(\mathbf{W}_{y}^{T}r_{i}+\mathbf{b}_{y_{i-1}y_{i}})$$

$$P_{\theta}(\mathbf{y}|\mathbf{x})=\frac{\prod_{i=1}^{n}\psi(y_{i-1},y_{i},r_{i})}{\sum_{\mathbf{y}^{\prime}\in\mathcal{Y}(\mathbf{x})}\prod_{i=1}^{n}\psi(y_{i-1}^{\prime},y_{i}^{\prime},r_{i})}$$
Unified Training Objective Instead of directly minimizing the negative log-likelihood, we unify the training objectives in Section 2.1, 2.2 and 2.3.
Specifically, we compute the marginal probability of each token Pθ(yi|x) with the forward-backward algorithm.
$$\begin{array}{c}{{\alpha(y_{i})=\sum_{\{y_{0},\ldots,y_{i-1}\}}\prod_{k=1}^{i}\psi(y_{k-1},y_{k},r_{k})}}\\ {{\beta(y_{i})=\sum_{\{y_{i+1},\ldots,y_{n}\}}\prod_{k=i+1}^{n}\psi(y_{k-1},y_{k},r_{k})}}\\ {{P_{\theta}(y_{i}|\mathbf{x})\propto\alpha(y_{i})\times\beta(y_{i})}}\end{array}$$
The marginal distributions can be computed efficiently. Given a partially annotated label sequence $\mathbf{y}^{*}=\{*,\ldots,y_{i},\ldots,*\}$, where $*$ denotes a label that is not observed, we can obtain the probability:
$$Q_{\theta}(y^{*}|x)=\prod_{i=1}^{n}Q_{\theta}(y_{i}|x)$$
| Method | AI | Literature | Music | Politics | Science | Average |
|--------|-------|--------------|---------|------------|-----------|-----------|
| *State-of-the-art Approaches* | | | | | | |
| Zheng et al. (2022) | 63.28 | 70.76 | 76.83 | 73.25 | 70.07 | 70.84 |
| Hu et al. (2022) | 65.79 | 71.11 | 78.78 | 74.06 | 71.83 | 72.31 |
| Tang et al. (2022) | 66.03 | 68.59 | 73.1 | 71.69 | 75.52 | 70.99 |
| *Baseline w/o Data Augmentation* | | | | | | |
| BERT-CRF | 65.06 | 71.39 | 78.18 | 74.46 | 73.95 | 72.61 |
| *Data Augmentation Approaches* | | | | | | |
| DAGA (Ding et al., 2020) | 66.77 | 71.15 | 78.48 | 73.30 | 73.07 | 72.55 |
| NERDA (Dai and Adel, 2020) | 70.20 | 71.28 | 79.56 | 75.30 | 74.37 | 74.14 |
| GPDA (sparse retrieval w/o EEA) | 67.14 | 72.20 | 79.55 | 74.96 | 74.69 | 73.71 |
| GPDA (dense retrieval w/o EEA) | 67.76 | 72.11 | 77.54 | 74.86 | 73.07 | 73.07 |
| GPDA (sparse retrieval w/ EEA) | 70.05 | 72.34† | 80.16† | 75.95† | 75.55† | 74.81† |
Table 1: Comparisons of different studies and our proposed GPDA on the CrossNER dataset. † means the result is significantly better than the compared baseline methods (with Student's t-test with p < 0.05).
where $Q_{\theta}(y_{i}|\mathbf{x})$ is defined as $P_{\theta}(y_{i}|\mathbf{x})$ if $y_{i}$ is observed, otherwise $Q_{\theta}(y_{i}|\mathbf{x})=1$.
The final model parameters can be optimized by minimizing the following objective:
$${\mathcal{L}}(\theta)=-\log Q_{\theta}(y^{*}|x)$$
For the pure labeled data $D=\{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\}_{i=1}^{N}$, we directly set $\mathbf{y}^{*}=\mathbf{y}^{(i)}$ and obtain the loss function:
$${\mathcal{L}}(\theta)=-\sum_{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\in D}\log Q_{\theta}(\mathbf{y}^{*}=\mathbf{y}^{(i)}|\mathbf{x}^{(i)})$$
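For concreteness, the sketch below (ours; an unbatched toy version that ignores the start transition and any padding) computes the per-token log-marginals with forward-backward and sums the negative log-marginals of the observed positions only, which equals $-\log Q_{\theta}(\mathbf{y}^{*}|\mathbf{x})$ since unobserved positions contribute a factor of 1.

```python
import torch

def log_marginals(emissions: torch.Tensor, transitions: torch.Tensor) -> torch.Tensor:
    """emissions: (n, L) unary log-scores; transitions: (L, L) label-transition log-scores."""
    n, L = emissions.shape
    alpha = [emissions[0]]
    for i in range(1, n):  # alpha_i(y) = em_i(y) + logsumexp_{y'} alpha_{i-1}(y') + trans(y', y)
        alpha.append(emissions[i] + torch.logsumexp(alpha[-1].unsqueeze(1) + transitions, dim=0))
    beta = [torch.zeros(L)]
    for i in range(n - 2, -1, -1):  # beta_i(y) = logsumexp_{y'} trans(y, y') + em_{i+1}(y') + beta_{i+1}(y')
        beta.insert(0, torch.logsumexp(transitions + emissions[i + 1] + beta[0], dim=1))
    log_z = torch.logsumexp(alpha[-1], dim=0)
    return torch.stack(alpha) + torch.stack(beta) - log_z   # (n, L) log P(y_i | x)

def partial_nll(emissions, transitions, observed):
    """observed: gold label id per position, or None where the label is not observed."""
    lm = log_marginals(emissions, transitions)
    return -sum(lm[i, y] for i, y in enumerate(observed) if y is not None)

# Toy usage: 4 tokens, 3 labels; positions 1 and 3 remain unannotated after propagation.
loss = partial_nll(torch.randn(4, 3), torch.randn(3, 3), [0, None, 2, None])
```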
## 2.2 Ner With Propagated Unlabeled Data
Building Propagating Graph. Compared to labeled data, large-scale unlabeled natural texts can be acquired much more easily. We attempt to utilize these natural texts for augmentation by building a graph between the labeled data nodes and the unlabeled text nodes according to their textual similarity. Given a labeled sample $(\mathbf{x}^{(i)},\mathbf{y}^{(i)})$, we retrieve its corresponding augmented sentences $\{\mathbf{x}'^{(i,j)}\}_{j=1}^{m}$ via a search engine. For common NER datasets, the search engine is built on the Wikipedia corpus with one of the two methods we explore: sparse retrieval based on BM25 (implemented with Elastic Search) or dense retrieval based on L2 similarity (implemented with ColBERT). The top related sentences are treated as connected to the original labeled sentence in the graph.
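For sparse retrieval, a query of this kind might look like the following sketch (ours); it assumes an Elasticsearch index named `wiki_sentences` with a `text` field has already been built from the Wikipedia corpus, and uses the 8.x Python client (older clients pass the query via `body=`).

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def retrieve_related(sentence: str, m: int = 10) -> list:
    """Fetch the top-m BM25-related Wikipedia sentences for one labeled sentence."""
    resp = es.search(
        index="wiki_sentences",
        query={"match": {"text": sentence}},   # BM25 scoring over the text field
        size=m,
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]
```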
Label Propagation. While building the graph, label propagation is conducted from labeled data $(\mathbf{x}^{(i)},\mathbf{y}^{(i)})$ to unlabeled data $\{\mathbf{x}'^{(i,j)}\}_{j=1}^{m}$ to generate partially annotated $\{(\mathbf{x}'^{(i,j)},\mathbf{y}'^{(i,j)})\}_{j=1}^{m}$. To strengthen the precision, propagation will not happen unless the anchor text in Wikipedia matches the labeled entity. By graph propagation, we obtain the augmented data $D'=\{(\mathbf{x}'^{(j)},\mathbf{y}'^{(j)})\}_{j=1}^{M}$, sharing the same entities but with more diverse contexts. Along with the original labeled data $D$, we train the NER model following the same objective as in Section 2.1:
$${\mathcal{L}}(\theta)=-\sum_{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\in D\cup D^{\prime}}\log Q_{\theta}(\mathbf{y}^{*}=\mathbf{y}^{(i)}|\mathbf{x}^{(i)})$$
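The propagation step itself amounts to projecting each labeled entity onto a retrieved sentence only where the surface form matches, leaving every other token unobserved. The sketch below (ours) illustrates this with whitespace tokenization and BIO tags; the actual pipeline additionally requires the matched span to be a Wikipedia anchor link.

```python
def propagate_labels(entities: list, retrieved: str) -> list:
    """entities: (surface_form, label) pairs from the labeled sample.
    Returns (token, label) pairs where label is None at unobserved positions."""
    tokens = retrieved.split()
    labels = [None] * len(tokens)
    for surface, label in entities:
        ent_toks = surface.split()
        for start in range(len(tokens) - len(ent_toks) + 1):
            if tokens[start:start + len(ent_toks)] == ent_toks:
                labels[start] = f"B-{label}"
                for k in range(1, len(ent_toks)):
                    labels[start + k] = f"I-{label}"
    return list(zip(tokens, labels))

print(propagate_labels(
    [("Adobe Creative Suite", "PRODUCT")],
    "Adobe Device Central was released as a part of the Adobe Creative Suite 3 ."))
```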
## 2.3 Ner With Explored Entity Annotations
To make the most efficient utilization of the explored annotations in $D'$, we adopt consistency-restricted self-training. A well-trained model from Section 2.2 is utilized to re-annotate the partially labeled augmented data under consistency restriction. Particularly, an augmented sample $(\mathbf{x}'^{(j)},\mathbf{y}'^{(j)})$ is re-annotated to $(\mathbf{x}'^{(j)},\hat{\mathbf{y}}^{(j)})$. Now we have $\hat{D}=\{(\mathbf{x}'^{(j)},\hat{\mathbf{y}}^{(j)})\}_{j=1}^{M}$. Along with the original labeled data $D$, we train a better NER model following the objective in Section 2.1:
$${\mathcal{L}}(\theta)=-\sum_{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\in D\cup{\hat{D}}}\log Q_{\theta}(\mathbf{y}^{*}=\mathbf{y}^{(i)}|\mathbf{x}^{(i)})$$
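One straightforward reading of the consistency restriction (our interpretation; the exact rule is implementation-specific) is to accept the model's re-annotation of an augmented sentence only when it agrees with the propagated labels at every observed position, and to otherwise fall back to the partial labels:

```python
def reannotate(predict, augmented_data):
    """predict: callable mapping a token list to a predicted label sequence.
    augmented_data: (tokens, partial_labels) pairs with None at unobserved slots."""
    out = []
    for tokens, partial in augmented_data:
        pred = predict(tokens)
        consistent = all(p is None or p == q for p, q in zip(partial, pred))
        out.append((tokens, pred if consistent else partial))
    return out
```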
## 3 Experiments

## 3.1 Dataset

We conduct experiments on the CrossNER (Liu et al., 2020) dataset of 5 genres (AI, Literature, Music, Politics, Science) and an anonymous multilingual E-commerce query NER dataset (Ecom) consisting of 3 languages (English, Spanish, French). Detailed statistics about these two datasets are provided in Table 2. For CrossNER, the search engine is manually built on the Wikipedia corpus, while for Ecom, an off-the-shelf E-commerce search engine is utilized to build the augmentation graph.
## 3.2 Results And Analysis
Low-resource NER Tasks. As illustrated in Table 1, the proposed GPDA consistently achieves the best F1 scores across the five genres of CrossNER
and gains an average improvement of 2.2% over the baseline BERT-CRF model. It also outperforms other data augmentation methods, demonstrating its effectiveness on multi-domain low-resource NER.
Furthermore, GPDA with the Explored Entity Annotation (EEA) strategy achieves 1.1% higher F1 than GPDA without EEA, suggesting that it is also crucial to extend unique entities rather than only diversifying entity contexts in data augmentation.
It can be noticed that GPDA with dense retrieval performs worse than with sparse retrieval, which is not intuitive. This may be attributed to the fact that dense retrieval requires careful supervised training in the target domain, while our pre-trained matching model is not finetuned. We will leave this part for future work.
Real-world Low-resource NER Scenarios Table 3 shows the F1 results on three languages from the real-world Ecom dataset. The augmented data generated by GPDA improves model performances for multilingual NER. For specific domain datasets where high-quality knowledge or texts can be fetched easily, GPDA are indeed helpful.
Size of Gold Samples We study the impact of GDPA on different size of gold samples in Fig. 2.
On the low-resource settings where 10%-25% gold samples are available, the improvement is striking which outperforms the baseline model by at most 37%.
![3_image_0.png](3_image_0.png)
Case Study. Taking a closer look at the augmented cases in Fig. 3, we notice that GPDA
generates different contexts concerning the entity
"Adobe Creative Suite". The augmented data generated by GPDA introduces more diversity to help reduce overfitting. Different from synthetic data, these generated data are all from natural texts so that there is no need to worry about the coherence in syntax or semantics.
## 4 Discussion
Retrieving relevant texts from databases has been widely used in NLP tasks. RaNER (Wang et al.,
2021) retrieves context using a search system to enhance the token representation for NER tasks.
To help entity disambiguation in domain-specific NER, Zhang et al. (2022) retrieves the domain-specific database to find the correlated sample. In order to leverage the extensive information about entities in Wikipedia and Wikidata, Wang et al.
(2022) and Tan et al. (2023) construct databases and retrieve context to enhance model performance.
In this study, we propose the utilization of retrieval techniques for data augmentation in low-resource settings. Furthermore, while they perform retrieval on both the training and testing datasets, we only use the small seed training dataset for retrieval. It's noteworthy that our approach can also be combined with theirs to further enhance the performance of NER in low-resource settings.
## 5 Conclusion
We present GPDA as a data augmentation framework for low-resource NER, which utilizes graph propagation with natural texts for augmentation.
To make the most efficient utilization of the explored partially labeled text, we adopt consistency-restricted self-training.
| | | #Train / #Dev / #Test | #DAGA / Avg Ent | #NERDA / Avg Ent | #GPDA / Avg Ent | #GPDA+EEA / Avg Ent |
|----------|-----|----------------|--------------|--------------|--------------|-----|
| CrossNER | Ai | 100/350/431 | 866 / 1.60 | 6000 / 5.32 | 447 / 3.41 | 2428 / 7.79 |
| | Lit | 100/400/416 | 2814 / 2.21 | 6000 / 5.41 | 297 / 2.00 | 3967 / 7.71 |
| | Mus | 100/380/465 | 1102 / 1.93 | 6000 / 6.49 | 307 / 2.35 | 4273 / 10.40 |
| | Pol | 200/541/651 | 5274 / 2.70 | 12000 / 6.52 | 718 / 2.36 | 8463 / 8.95 |
| | Sci | 200/450/543 | 3896 / 2.79 | 12000 / 5.38 | 552 / 3.97 | 7494 / 9.53 |
| Ecom | en | 1000/1000/1000 | 4740 / 1.10 | 32000 / 0.96 | 30000 / 1.50 | N/A |
| | es | 1000/1000/1000 | 20362 / 1.07 | 32000 / 1.09 | 30000 / 1.41 | N/A |
| | fr | 1000/1000/1000 | 17340 / 1.08 | 32000 / 1.14 | 30000 / 1.24 | N/A |
Table 2: The statistics of the datasets used and generated in our experiments.
## Gold Training Data
… with the 2016 introduction of the voice editing and generation software [PRODUCT Adobe Voco] , a prototype slated to be a part of the [PRODUCT Adobe Creative Suite] and [ORGANISATION DeepMind] [PRODUCT WaveNet] , …
## Augmented Data
1) Adobe Voco is an unreleased … prototype software by [ORG Adobe] that enables novel editing and generation of audio . Dubbed
"[PRO Photoshop] -for-voice" , it was first previewed at the [PRO Adobe MAX] event in November 2016.
2) With the 2016 introduction of Adobe Voco audio editing and generating software prototype slated to be part of the [PRO Adobe Creative Suite] and the similarly enabled DeepMind [PRO WaveNet], a [ALG deep neural network] based audio synthesis software …
3) Adobe Device Central is a software program created and released by [ORG Adobe Systems ] as a part of the [PRO Adobe Creative Suite] 3 ( CS3 ) in March 2007 .
4) [PRO Adobe Creative Suite], a design and development software suite by Adobe Systems.
Figure 3: An illustration of diversity of augmented data. The pink annotations are propagated via anchor matching while the yellow ones are labeled with EEA
| Method | en | es | fr | Avg |
|----------|-------|-------|-------|-------|
| Baseline | 76.54 | 85.50 | 72.78 | 78.27 |
| DAGA | 77.11 | 86.51 | 81.32 | 81.65 |
| NERDA | 77.10 | 87.05 | 81.64 | 81.93 |
| GPDA | 77.83 | 87.23 | 82.48 | **82.51** |
Table 3: Results on the Ecom dataset.
Experiment results show that our proposed GPDA achieves substantial improvements over previous data augmentation methods on multiple low-resource NER datasets.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through Alibaba Innovative Research Program.
## 6 Limitations
There are some limitations in the use of GPDA.
- The label propagation procedure requires anchor matching to preserve annotation precision, which limits the unlabeled data source. However, Wikipedia is an open-domain, easy-to-fetch corpus with anchor links, which can somewhat mitigate the issue.
- Augmented data generated by GPDA provide more diversity, but for some datasets, simple modifications (NERDA) of the original words perform better. We are investigating a hybrid approach to apply GPDA and NERDA in the same framework.
## References
Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3861–3867, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. DAGA: Data augmentation with a generation approach for low-resource tagging tasks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 6045–6057, Online. Association for Computational Linguistics.
Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 875–886, Copenhagen, Denmark. Association for Computational Linguistics.
Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu.
2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1234–1245, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jinpeng Hu, He Zhao, Dan Guo, Xiang Wan, and TsungHui Chang. 2022. A label-aware autoregressive framework for cross-domain NER. In Findings of the Association for Computational Linguistics: NAACL
2022, pages 2222–2232, Seattle, United States. Association for Computational Linguistics.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457, New Orleans, Louisiana. Association for Computational Linguistics.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China. Association for Computational Linguistics.
Gakuto Kurata, Bing Xiang, and Bowen Zhou. 2016.
Labeled data generation with encoder-decoder lstm for semantic slot filling. In *Interspeech*.
Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling, and Yan Song. 2020. Conditional augmentation for aspect term extraction via masked sequence-tosequence generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7056–7066, Online. Association for Computational Linguistics.
Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, and Pascale Fung. 2020. Crossner: Evaluating cross-domain named entity recognition.
Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2339–2352, Online. Association for Computational Linguistics.
Gözde Gül ¸Sahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for lowresource languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5004–5009, Brussels, Belgium.
Association for Computational Linguistics.
Zeqi Tan, Shen Huang, Zixia Jia, Jiong Cai, Yinghui Li, Weiming Lu, Yueting Zhuang, Kewei Tu, Pengjun Xie, Fei Huang, and Yong Jiang. 2023. Damo-nlp at semeval-2023 task 2: A unified retrieval-augmented system for multilingual named entity recognition.
Minghao Tang, Peng Zhang, Yongquan He, Yongxiu Xu, Chengpeng Chao, and Hongbo Xu. 2022. DoSEA:
A domain-specific entity-aware framework for cross-domain named entity recognition. In *Proceedings of* the 29th International Conference on Computational Linguistics, pages 2147–2156, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Yuta Tsuboi, Hisashi Kashima, Shinsuke Mori, Hiroki Oda, and Yuji Matsumoto. 2008. Training conditional random fields using incomplete annotations.
In *Proceedings of the 22nd International Conference* on Computational Linguistics (Coling 2008), pages 897–904, Manchester, UK. Coling 2008 Organizing Committee.
Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021.
Improving named entity recognition by external context retrieving and cooperative learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1800–1812, Online.
Association for Computational Linguistics.
Xinyu Wang, Yongliang Shen, Jiong Cai, Tao Wang, Xiaobin Wang, Pengjun Xie, Fei Huang, Weiming Lu, Yueting Zhuang, Kewei Tu, Wei Lu, and Yong Jiang.
2022. DAMO-NLP at SemEval-2022 task 11: A
knowledge-based system for multilingual named entity recognition. In *Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval2022)*, pages 1457–1468, Seattle, United States. Association for Computational Linguistics.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics.
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional bert contextual augmentation. In *International Conference on Computational Science*, pages 84–95. Springer.
Adams Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension.
ICLR.
Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In *International Conference on* Learning Representations.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc.
Xin Zhang, Yong Jiang, Xiaobin Wang, Xuming Hu, Yueheng Sun, Pengjun Xie, and Meishan Zhang.
2022. Domain-specific NER via retrieving correlated samples. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 2398–2404, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Junhao Zheng, Haibin Chen, and Qianli Ma. 2022.
Cross-domain named entity recognition via graph matching. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2670–2680, Dublin, Ireland. Association for Computational Linguistics.
Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. 2020. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022. MELM: Data augmentation with masked entity language modeling for low-resource NER. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2251–2262, Dublin, Ireland. Association for Computational Linguistics.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✓ A2. Did you discuss any potential risks of your work?
Section 5
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Due to the page limit, we did not report these.
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Due to the page limit, we did not report these.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
maekawa-etal-2023-dataset | Dataset Distillation with Attention Labels for Fine-tuning {BERT} | https://aclanthology.org/2023.acl-short.12 | Dataset distillation aims to create a small dataset of informative synthetic samples to rapidly train neural networks that retain the performance of the original dataset. In this paper, we focus on constructing distilled few-shot datasets for natural language processing (NLP) tasks to fine-tune pre-trained transformers. Specifically, we propose to introduce attention labels, which can efficiently distill the knowledge from the original dataset and transfer it to the transformer models via attention probabilities. We evaluated our dataset distillation methods in four various NLP tasks and demonstrated that it is possible to create distilled few-shot datasets with the attention labels, yielding impressive performances for fine-tuning BERT. Specifically, in AGNews, a four-class news classification task, our distilled few-shot dataset achieved up to 93.2{\%} accuracy, which is 98.5{\%} performance of the original dataset even with only one sample per class and only one gradient step. | # Dataset Distillation With Attention Labels For Fine-Tuning Bert
Aru Maekawa, Naoki Kobayashi, Kotaro Funakoshi, and Manabu Okumura Tokyo Institute of Technology
{maekawa, kobayasi, funakoshi, oku}@lr.pi.titech.ac.jp
## Abstract
Dataset distillation aims to create a small dataset of informative synthetic samples to rapidly train neural networks that retain the performance of the original dataset. In this paper, we focus on constructing distilled few-shot datasets for natural language processing (NLP) tasks to fine-tune pre-trained transformers. Specifically, we propose to introduce attention labels, which can efficiently distill the knowledge from the original dataset and transfer it to the transformer models via attention probabilities. We evaluated our dataset distillation methods on four different NLP tasks and demonstrated that it is possible to create distilled few-shot datasets with the attention labels, yielding impressive performances for fine-tuning BERT. Specifically, in AGNews, a four-class news classification task, our distilled few-shot dataset achieved up to 93.2% accuracy, which is 98.5% of the performance of the original dataset, even with only one sample per class and only one gradient step.
## 1 Introduction
Deep learning models have achieved state-of-the-art performance in various fields, including computer vision and natural language processing
(NLP), using large-scale neural networks trained with huge datasets. Unfortunately, their successful performances have come with massive training costs, including training time, GPU resources, and energy consumption. To reduce the training costs, current research has been focusing on constructing a small training dataset such that models trained with it can achieve comparable performances to models trained with the whole original dataset.
One classical way to compress the training dataset is data selection. Data selection methods choose a subset of effective training samples on the basis of a number of heuristic measures, for example, cluster centers (Sener and Savarese, 2018),
diversity (Aljundi et al., 2019), and likelihood of models (Moore and Lewis, 2010). Although the data selection methods effectively work for efficient model training and several applications, such as active learning (Sener and Savarese, 2018) and continual learning (Aljundi et al., 2019), their performance is clearly restricted because they rely on the existence of representative samples that are effective for model training in the original dataset.
As an alternative approach for reducing the training dataset, Wang et al. (2018b) proposed *dataset* distillation, which aims to create a small number of synthetic samples optimized to effectively train models. Dataset distillation has attracted much attention in machine learning (Wang et al., 2018b; Zhao et al., 2021; Zhao and Bilen, 2021; Sucholutsky and Schonlau, 2021; Bohdal et al., 2020; Wang et al., 2022; Cazenavette et al., 2022) for both the theoretical interest and various applications, such as neural architecture/hyper-parameter search
(Such et al., 2020), continual learning (Masarczyk and Tautkute, 2020; Rosasco et al., 2022), federated learning (Goetz and Tewari, 2020; Zhou et al.,
2020), and preserving data privacy (Li et al., 2020; Dong et al., 2022).
However, most of the existing research on dataset distillation mainly focuses on image datasets, and only a few studies involve NLP tasks.
Sucholutsky and Schonlau (2021) and Li and Li
(2021) extended dataset distillation to text datasets by using embedding vectors as an input of the distilled dataset instead of discrete text. While these studies applied dataset distillation to those model architectures based on convolutional neural networks (CNNs) and recurrent neural networks
(RNNs), we cannot find any research that tackles dataset distillation for pre-trained transformers, such as BERT (Devlin et al., 2019), which have become the de-facto standard for various kinds of NLP tasks. Therefore, in this paper, we aim to obtain distilled few-shot datasets to fine-tune the pre-trained transformers for NLP tasks.
To this end, we focus on the attention mechanism, which is the core component of transformers (Vaswani et al., 2017). Several current studies utilized supervision of the attention probabilities to effectively train the model (Liu et al., 2016; Mi et al., 2016). Moreover, attention supervision is also used for model distillation to efficiently transfer the knowledge of a transformer model to another one via attention probabilities (Aguilar et al., 2020; Jiao et al., 2020; Sun et al., 2020; Wang et al., 2020, 2021). Inspired by this, we propose distilled attention labels, which are the supervision of attention probabilities optimized as a part of the distilled dataset, to enhance the effectiveness of the distilled dataset for training the transformer models.
In our experiments, we constructed distilled fewshot datasets to fine-tune BERT (Devlin et al.,
2019) in various types of NLP tasks: AGNews (text classification), SST-2 (sentiment analysis), QNLI
(QA/NLI), and MRPC (paraphrase identification).
Our main contributions are as follows: (i) To the best of our knowledge, this is the first work to explore dataset distillation for pre-trained transformers. Specifically, we demonstrate that our distilled datasets effectively fine-tune BERT even with only one sample for each class and only one gradient step. (ii) We present the distilled attention labels, which can easily be applied to dataset distillation for transformer architectures. Experimental results show that they consistently improved the performance with the distilled datasets in various types of NLP tasks. (iii) We open our source code and the distilled datasets obtained through our experiments to facilitate further research.1
## 2 Methodology

## 2.1 Dataset Distillation
In this section, we explain the basic approach of dataset distillation (Wang et al., 2018b), which aims to optimize a synthetic dataset through the gradient method similar to the current meta-learning approach (Finn et al., 2017).
Let the original training dataset be $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, where $(x_i, y_i)$ is a pair of an input and its class label. Our goal is to optimize a distilled dataset $\tilde{\mathcal{D}} = \{(\tilde{x}_i, \tilde{y}_i)\}_{i=1}^{M}$, which is randomly initialized at first, with $M \ll N$.
The model parameters θ are updated with a minibatch of the distilled dataset (x˜t, y˜t) by gradient descent (GD) steps as follows:
$$\theta_{t+1}=\theta_{t}-\tilde{\eta}\nabla_{\theta_{t}}\mathcal{L}_{task}\quad\text{s.t.}\quad\mathcal{L}_{task}=L(\tilde{\mathbf{x}}_{t},\tilde{\mathbf{y}}_{t},\theta_{t}),\tag{1}$$
where L() is a twice-differentiable loss function and η˜ is the learnable learning rate of the model, which is optimized together with D˜. Given initial model parameters θ0, we can represent the model trained with the distilled dataset D˜, with the number of GD steps T, as
$$\theta_{T}=F(\theta_{0};\tilde{\mathcal{D}},\tilde{\eta},T),\tag{2}$$
where F() is the training procedure of the T steps for the GD updating (Eq. 1).
As the goal of dataset distillation is that θT performs well on the original dataset, the optimization objective of the distilled dataset D˜ is calculated as follows:
$$\mathcal{L}_{distill}(\tilde{\mathcal{D}},\tilde{\eta};\theta_{0}):=L(\mathbf{x}_{t},\mathbf{y}_{t},\theta_{T})\tag{3}$$
$$=L(\mathbf{x}_{t},\mathbf{y}_{t},F(\theta_{0};\tilde{\mathcal{D}},\tilde{\eta},T)),\tag{4}$$
where (xt, yt) is a mini-batch of the original training dataset.
Therefore, the optimization problem for dataset distillation is formulated as
$$\tilde{\mathcal{D}}^{*},\tilde{\eta}^{*}=\operatorname*{arg\,min}_{\tilde{\mathcal{D}},\tilde{\eta}}\;\mathbb{E}_{\theta_{0}\sim p(\theta_{0})}\left[\mathcal{L}_{distill}(\tilde{\mathcal{D}},\tilde{\eta};\theta_{0})\right],\tag{5}$$
where p(θ0) is the distribution of θ0.
We optimize the distilled dataset D˜ with this objective by using current gradient-based optimization techniques, e.g., Adam (Kingma and Ba, 2015).
However, the discrete nature of text data makes it difficult to apply the gradient methods directly. Inspired by previous work (Sucholutsky and Schonlau, 2021; Li and Li, 2021), we use a sequence of embedding vectors for inputs of the distilled dataset instead of text as it is. Using the embeddings makes the loss L*distill* differentiable with respect to D˜, and we can thus optimize the distilled dataset D˜ by the gradient methods.
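For concreteness, the following is a minimal PyTorch sketch of this bilevel optimization with a single inner GD step; the toy mean-pooling classifier, the tensor shapes, and `real_loader` are illustrative assumptions rather than the paper's released implementation (see footnote 1 for the official code).

```python
import torch
import torch.nn.functional as F

# Illustrative sizes: M distilled samples, each a sequence of embedding vectors.
M, seq_len, emb_dim, num_classes = 4, 32, 128, 4

# Learnable distilled inputs (embeddings) and the learnable inner learning rate.
distilled_x = torch.randn(M, seq_len, emb_dim, requires_grad=True)
distilled_y = torch.arange(M) % num_classes              # hard class labels (Sec. 2.1)
inner_lr = torch.tensor(1e-2, requires_grad=True)

outer_opt = torch.optim.Adam([distilled_x, inner_lr], lr=1e-2)

def classify(w, x):
    # Toy twice-differentiable model standing in for BERT: mean-pool + linear head.
    return x.mean(dim=1) @ w

for real_x, real_y in real_loader:                       # assumed loader of real (embeddings, labels)
    # theta_0 ~ p(theta_0): freshly initialized, differentiable parameters.
    w0 = torch.randn(emb_dim, num_classes, requires_grad=True)

    # Inner step (T = 1): one GD update on the distilled data (Eq. 1).
    task_loss = F.cross_entropy(classify(w0, distilled_x), distilled_y)
    (grad_w,) = torch.autograd.grad(task_loss, w0, create_graph=True)
    w1 = w0 - inner_lr * grad_w                          # theta_T = F(theta_0; D~, eta~, T)

    # Outer objective (Eqs. 3-5): the updated model should fit a real mini-batch.
    distill_loss = F.cross_entropy(classify(w1, real_x), real_y)
    outer_opt.zero_grad()
    distill_loss.backward()                              # backprop through the inner update
    outer_opt.step()
```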
## 2.2 Distilled Soft Labels
The class labels of the original dataset are usually discrete hard labels (i.e., one-hot labels representing only a single class). Instead of hard labels, we can use soft labels for distilled datasets and optimize them with the input embeddings. Using soft labels enables the distilled datasets to contain more information. Following previous work (Sucholutsky and Schonlau, 2021; Bohdal et al., 2020), we first initialize the soft labels with one-hot values and enable them to take any real values. We can now optimize the soft labels through the gradient method as well as the input embeddings.

1https://github.com/arumaekawa/dataset-distillation-with-attention-labels
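As a sketch, the soft labels can be parameterized as a learnable tensor initialized with one-hot values. The soft cross-entropy below is one common choice and is an assumption, since the paper only specifies the one-hot initialization and that the labels may take any real value.

```python
import torch
import torch.nn.functional as F

M, num_classes = 4, 4                                   # illustrative sizes

# Initialize soft labels with one-hot values, then let them take any real values.
soft_labels = F.one_hot(torch.arange(M) % num_classes, num_classes).float()
soft_labels.requires_grad_(True)                        # optimized jointly with the embeddings

def soft_label_loss(logits, soft_labels):
    # Cross-entropy against softmax-normalized soft targets instead of hard indices.
    log_probs = F.log_softmax(logits, dim=-1)
    targets = F.softmax(soft_labels, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```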
## 2.3 Distilled Attention Labels
For efficient knowledge transfer to transformer models via training with the distilled dataset, we propose attention labels, which are optimized to guide the multi-head attention module of the transformer models.
Inspired by previous work (Aguilar et al., 2020; Wang et al., 2020, 2021), we compute the Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}$ between the self-attention probabilities of the model $a(\theta)$ and the distilled attention labels $\tilde{a}$ across all layers and heads. The attention loss $\mathcal{L}_{attn}$ is computed as follows:
$$\mathcal{L}_{attn}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{H}\sum_{h=1}^{H}D_{\mathrm{KL}}\left(\tilde{a}_{k,h}\,\|\,a_{k,h}(\theta)\right),\tag{6}$$
where a˜k,h and ak,h(θ) are the attention maps for the h-th head of the k-th layer of the distilled attention labels and the model, respectively, K is the number of layers, and H is the number of heads.
Due to the data size, we consider the attention probabilities only for the first input token ([CLS]).
We train the model to minimize L*task* and L*attn* at the same time. Thus, the GD updating of the model (Eq. 1) is modified as
$$\theta_{t+1}=\theta_{t}-\tilde{\eta}\nabla_{\theta_{t}}(\mathcal{L}_{task}+\lambda\mathcal{L}_{attn}),$$
where λ is the balance weight for L*attn*.
The attention labels a˜ are first initialized randomly and restricted to being a valid probability distribution (i.e., non-negative and the sum equals 1) by applying the softmax function to real-valued vectors. We optimize the attention labels together with the input embeddings and the soft labels by the gradient method. The details of the step-by-step procedure of our distillation algorithm are shown in Appendix A.
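A sketch of the attention loss in Eq. 6 is given below; the distilled attention labels are stored as real-valued logits and mapped to distributions with a softmax. The `(K, H, seq_len)` layout and the extraction of the model's [CLS] attention row (e.g., from `outputs.attentions` of a Hugging Face BERT forward pass with `output_attentions=True`) are assumptions about one possible implementation.

```python
import torch
import torch.nn.functional as F

def attention_label_loss(model_attn, attn_label_logits):
    """KL divergence between distilled attention labels and the model's [CLS]
    self-attention, averaged over K layers and H heads (cf. Eq. 6).

    model_attn:        (K, H, seq_len) attention probabilities of the [CLS] query.
    attn_label_logits: (K, H, seq_len) learnable real-valued logits.
    """
    attn_labels = F.softmax(attn_label_logits, dim=-1)          # valid distributions
    kl = (attn_labels * (attn_labels.clamp_min(1e-12).log()
                         - model_attn.clamp_min(1e-12).log())).sum(-1)
    return kl.mean()                                            # mean over layers and heads

# Inner update with both losses (cf. Eq. 7), with lambda = 1.0 as in Section 3.1:
# loss = task_loss + 1.0 * attention_label_loss(model_attn, attn_label_logits)
# theta_next = theta - inner_lr * grad(loss, theta)
```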
## 3 Experiments

## 3.1 Settings
Datasets. We evaluated our dataset distillation methods in various types of NLP tasks. We used
| Dataset | Task | Metric | C | # Train | # Test (Dev.) |
|-----------|---------------------|----------|-----|-----------|-----------------|
| AGNews | news classification | acc. | 4 | 120k | 7.6k |
| SST-2 | sentiment | acc. | 2 | 67k | 872 |
| QNLI | QA/NLI | acc. | 2 | 105k | 5.5k |
| MRPC | paraphrase | acc./F1 | 2 | 3.7k | 408 |
a text classification task (AGNews (Zhang et al.,
2015)) and three different natural language understanding tasks (SST-2, QNLI, and MRPC) from the GLUE benchmark (Wang et al., 2018a). For the evaluation metrics, we used accuracy for AGNews.
For the other three tasks, we followed the evaluation settings of GLUE (Wang et al., 2018a). The statistics of each benchmark dataset are summarized in Table 1.
Network Architecture. To evaluate the dataset distillation methods, we constructed distilled few-shot datasets to fine-tune BERT (Devlin et al., 2019), the first pre-trained transformer model, on which all subsequent models are based.
We utilized the pre-trained BERT$_{\text{BASE}}$ model. Following the fine-tuning procedure in Devlin et al. (2019), we introduced additional classification layer weights $W \in \mathbb{R}^{C \times D}$ on the last hidden state of the [CLS] token, where $D$ is the hidden dimension of BERT and $C$ is the number of classes.
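A minimal sketch of this classification head on top of BERT is shown below; feeding the distilled embeddings through `inputs_embeds` and the normal initialization of W are assumptions, not the exact training code.

```python
import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
D = bert.config.hidden_size          # hidden dimension of BERT (768 for BERT-BASE)
C = 4                                # number of classes, e.g. 4 for AGNews

# Classification weights W in R^{C x D}, applied to the last hidden state of [CLS].
W = torch.nn.Parameter(torch.empty(C, D))
torch.nn.init.normal_(W, std=0.02)

def classify(inputs_embeds, attention_mask=None):
    # Distilled inputs are embedding vectors, so they bypass the token embedding lookup.
    out = bert(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
    cls_hidden = out.last_hidden_state[:, 0]     # [CLS] representation, shape (B, D)
    return cls_hidden @ W.t()                    # logits, shape (B, C)
```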
Implementation. For all our distilled datasets, we used Adam optimizer (Kingma and Ba, 2015) with a learning rate α ∈ {1e−3, 1e−2, 1e−1} and trained the distilled datasets for 30 epochs. We initialized the learnable learning rate η˜ ∈ {1e−2, 1e−1}. For the attention labels, we set λ = 1.0, which performed well in our preliminary experiments. We report the results for the best performing combination of α and η˜. Note that due to the coarse granularity of the search, there is no need to care about overfitting to the test set. More details of our implementation are shown in Appendix B.
Evaluation. To evaluate the distilled datasets, we fine-tuned the BERT model with them for 100 times, where the additional parameters W were randomly initialized each time. In all our experiments, we report the mean and standard deviation over the 100 evaluation results.
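The evaluation protocol can be summarized by the following sketch, where `fine_tune_with_distilled` and `accuracy` are hypothetical helpers wrapping the fine-tuning and test-set scoring described above.

```python
import torch

def evaluate_distilled(distilled_data, test_loader, n_runs=100):
    """Fine-tune BERT with the distilled data n_runs times, re-initializing the
    classification weights W each time, and report mean and standard deviation."""
    scores = []
    for _ in range(n_runs):
        model = fine_tune_with_distilled(distilled_data)   # fresh random W each run
        scores.append(accuracy(model, test_loader))
    scores = torch.tensor(scores)
    return scores.mean().item(), scores.std().item()
```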
## 3.2 Results For 1-Shot And 1-Step Setting
We first evaluated the dataset distillation methods with a 1-shot and 1-step setting, where the distilled
| Method | AGNews | SST-2 | QNLI | MRPC |
|---|---|---|---|---|
| Majority | 25.0 | 50.9 | 50.5 | 74.8 |
| HL | 87.4±1.8 | 81.6±2.4 | 68.6±2.5 | 74.8±0.0 |
| SL | 88.4±0.9 | 82.5±1.6 | 76.4±0.8 | 74.8±0.0 |
| HL + AL | 93.2±0.1 | 90.1±0.3 | 85.9±0.1 | 76.4±0.8 |
| SL + AL | 93.0±0.1 | 89.0±0.2 | 86.4±0.1 | 78.8±0.7 |
| Full dataset | 94.6 | 92.7* | 91.8* | 88.6* |

Table 2: Experimental results for the 1-shot and 1-step setting. 'HL' and 'SL' mean hard and soft class labels, respectively, and 'AL' means attention labels. 'Majority' denotes the majority-class baseline.

| Setting | # step | # shot | AGNews | SST-2 | QNLI | MRPC |
|---|---|---|---|---|---|---|
| Single-step setting | 1 | 1 | 93.0±0.1 | 89.0±0.2 | 86.4±0.1 | 78.8±0.7 |
| | 1 | 3 | 93.5±0.1 | 90.3±0.2 | 86.7±0.1 | 79.3±0.5 |
| | 1 | 5 | 93.1±0.1 | 90.1±0.2 | 86.9±0.1 | 79.4±0.5 |
| Same distilled data for each step | 3 | 1 | 93.0±0.1 | 89.8±0.4 | 84.2±0.4 | 74.8±0.0 |
| | 5 | 1 | 92.1±0.1 | 85.8±0.4 | 85.9±0.1 | 74.8±0.0 |
| Different distilled data for each step | 3 | 3 | 92.5±0.1 | 90.4±0.2 | 87.0±0.1 | 80.3±0.8 |
| | 5 | 5 | 93.1±0.1 | 90.7±0.2 | 86.1±0.1 | 76.5±0.8 |

Table 3: Results for the multiple-shot and multiple-step setting.
dataset includes only one sample per class, and BERT was fine-tuned with it by only one GD step.
We compared the performance for hard/soft labels and with/without attention labels for each task.
Table 2 shows the evaluation results. The distilled datasets with the hard labels, i.e., only optimizing the input embeddings and not applying the attention labels, still achieved 87.4, 81.6, and 68.6 for AGNews, SST-2, and QNLI, respectively, which is 92.4, 88.0, and 74.7% performance of the full dataset. Furthermore, using the soft labels further improved these performances, especially by almost 8 points for QNLI. However, for MRPC,
the distilled dataset achieved only the same performance as the majority class baseline regardless of the use of the soft labels.
When applying the attention labels, the performance of the distilled dataset was significantly improved for all tasks, and their effect is much greater than the soft labels. Specifically, our distilled dataset with the attention labels yielded up to 98.5, 97.2, 94.1, and 88.9% performance of the full dataset for AGNews, SST-2, QNLI, and MRPC,
respectively. These results indicate that the attention labels make it possible to extract information from the original dataset in the form of attention probabilities and to transfer it efficiently to the model.
When comparing the performance between the four tasks, dataset distillation performed very well on relatively simple classification tasks such as AGNews and SST-2, while the performance was somewhat limited on QNLI and MRPC, which require understanding the relationship between two sentences. In particular, for MRPC, although the performance was improved by applying the attention labels, the gap from the full dataset was still larger than that in the other three tasks. The class imbalance in the original training dataset (68% positive) may make the training of the distilled dataset more difficult. We can say there is still room for performance improvement by dealing with this issue (e.g., by upsampling or downsampling).
## 3.3 Results For Multiple-Shot And Multiple-Step Setting
We also evaluated the distilled datasets with more than one shot and more than one GD step to finetune BERT. For the multiple-step setting, we considered two different scenarios: using the same distilled data in all steps and using different distilled data for each step. In these experiments, we evaluated the distilled datasets that use soft labels and attention labels for different numbers of GD
steps T ∈ {1, 3, 5}.
Table 3 shows the results for the multiple-shot and multiple-step setting. In the single-step setting, overall performance improved with the number of shots of the distilled data. We believe that this is simply due to the expressiveness of the distilled data improved with the size of them. When using the same distilled data for all steps in the multiple-step setting, the performance of the distilled datasets degraded even compared with that in the single-step setting. In contrast, the performance was improved by separating the distilled data for each step and slightly but better than that with the same number of shots in the single-step setting.
These results suggest that the role of the distilled data is different between the earlier and later steps, and it is difficult to obtain the distilled data that are generally useful for all GD steps.
In addition, the basic dataset distillation algorithm we used requires computing the back propagation through all GD steps for the optimization of the distilled dataset, which increases memory and computational costs linearly with T. Therefore, it was difficult to increase T to be larger than 5 in our experiments. This is the limitation of our dataset distillation method, and it needs further improvement to scale to more complex tasks or to train models from scratch.
## 4 Conclusion
In this paper, we explored dataset distillation in NLP tasks to fine-tune pre-trained transformers.
We proposed attention labels, which are the supervision of attention probabilities distilled as a part of the distilled datasets. Experimental results across various tasks demonstrate that our distilled fewshot datasets achieved successful performances even with only one sample per class. Notably, the attention labels significantly improved the performance of the distilled datasets even for the tasks where dataset distillation is difficult without them.
## Limitations
We think the following three points are the limitations of this work. (i) As mentioned in Section 3.3, the computational cost of our distillation approach increases linearly with the number of GD steps and the distilled data size. It is necessary to explore efficient distillation algorithms to scale our method to more complex tasks or full-scratch training in future work. (ii) To optimize the distilled dataset through the gradient method, we utilized word embedding vectors instead of directly optimizing the text as in the existing work. Therefore, the distilled dataset we obtained cannot be applied to models with different word embeddings, such as other pretrained models or full-scratch training. (iii) In our experiments, we evaluated our approach only on text classification tasks. However, our approach can also be applied to text generation tasks as well by applying the attention labels to all input tokens
(not only [CLS]) and using vocabulary-wise soft labels. In future work, we should investigate its performance and explore more effective approaches.
## References
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020. Knowledge distillation from internal representations. *Proceedings* of the AAAI Conference on Artificial Intelligence, 34(05):7350–7357.
Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. 2019. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Ondrej Bohdal, Yongxin Yang, and Timothy M.
Hospedales. 2020. Flexible dataset distillation: Learn labels instead of images. *CoRR*,
abs/2006.08572.
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, and Jun-Yan Zhu. 2022. Dataset distillation by matching training trajectories. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022, pages 4749–4758. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Tian Dong, Bo Zhao, and Lingjuan Lyu. 2022. Privacy for free: How does dataset condensation help privacy? In *Proceedings of the 39th International* Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5378–5396. PMLR.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017.
Model-agnostic meta-learning for fast adaptation of deep networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pages 1126–1135. PMLR.
Jack Goetz and Ambuj Tewari. 2020. Federated learning via synthetic data. *CoRR*, abs/2008.04489.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163–
4174, Online. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Guang Li, Ren Togo, Takahiro Ogawa, and Miki Haseyama. 2020. Soft-label anonymous gastric xray image distillation. In 2020 IEEE International Conference on Image Processing (ICIP), pages 305–
309.
Yongqi Li and Wenjie Li. 2021. Data distillation for text classification. *CoRR*, abs/2104.08448.
Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING
2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3093–
3102, Osaka, Japan. The COLING 2016 Organizing Committee.
Wojciech Masarczyk and Ivona Tautkute. 2020. Reducing catastrophic forgetting with learning on synthetic data. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, Seattle, WA, USA, June 14-19, 2020, pages 1019–1024. Computer Vision Foundation / IEEE.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016.
Supervised attentions for neural machine translation.
In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 2283–2288, Austin, Texas. Association for Computational Linguistics.
Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In *Proceedings of the ACL 2010 Conference Short Papers*,
pages 220–224, Uppsala, Sweden. Association for Computational Linguistics.
Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, and Davide Bacciu. 2022. Distilled replay: Overcoming forgetting through synthetic samples. In *Continual Semi-Supervised Learning*, pages 104–117, Cham. Springer International Publishing.
Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In *6th International Conference on Learning* Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth Stanley, and Jeffrey Clune. 2020. Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. In *Proceedings of the 37th International Conference* on Machine Learning, volume 119 of *Proceedings* of Machine Learning Research, pages 9206–9216.
PMLR.
Ilia Sucholutsky and Matthias Schonlau. 2021. Softlabel dataset distillation and text dataset distillation.
In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT:
a compact task-agnostic BERT for resource-limited devices. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2158–2170, Online. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018a.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Shuo Yang, Shuo Wang, Guan Huang, Hakan Bilen, Xinchao Wang, and Yang You. 2022. Cafe: Learning to condense dataset by aligning features. In *2022* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12186–12195.
Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A. Efros. 2018b. Dataset distillation. *CoRR*,
abs/1811.10959.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657.
Bo Zhao and Hakan Bilen. 2021. Dataset condensation with differentiable siamese augmentation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings* of Machine Learning Research, pages 12674–12685. PMLR.
Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. 2021.
Dataset condensation with gradient matching. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, and Dapeng Wu. 2020. Distilled one-shot federated learning. *CoRR*, abs/2009.07999.
## Algorithm 1: Dataset Distillation with Attention Labels

**Input:** Training dataset $\mathcal{D}$, distribution of initial parameters $p(\theta_0)$, number of outer-loop steps $S$, number of inner-loop steps $T$, initial learnable learning rate $\tilde{\eta}_0$, learning rate for the distilled dataset $\alpha$, balance weight for the attention loss $\lambda$.

1: Initialize distilled dataset: $\tilde{\mathcal{D}} = \{(\tilde{x}_i, \tilde{y}_i, \tilde{a}_i)\}_{i=1}^{M}$ randomly
2: Initialize learnable learning rate: $\tilde{\eta} \leftarrow \tilde{\eta}_0$
3: **for** outer step $s = 1, \dots, S$ **do**
4: Initialize parameters: $\theta_0 \sim p(\theta_0)$
5: **for** inner step $t = 1, \dots, T$ **do**
6: Get the $t$-th mini-batch of distilled data:
7: $(\tilde{x}_t, \tilde{y}_t) \sim \tilde{\mathcal{D}}$
8: Compute task loss $\mathcal{L}_{task} = L(\tilde{x}_t, \tilde{y}_t, \theta_{t-1})$
9: Compute attention loss $\mathcal{L}_{attn}$ following Eq. 6
10: Update parameters:
11: $\theta_{t+1} = \theta_t - \tilde{\eta}\nabla_{\theta_t}(\mathcal{L}_{task} + \lambda\mathcal{L}_{attn})$
12: **end for**
13: Sample a mini-batch of real data: $(x_s, y_s) \sim \mathcal{D}$
14: Update distilled data:
15: $\tilde{\mathcal{D}} \leftarrow \tilde{\mathcal{D}} - \alpha\nabla_{\tilde{\mathcal{D}}}L(x_s, y_s, \theta_T)$
16: **end for**

**Output:** Distilled dataset $\tilde{\mathcal{D}}$ and learning rate $\tilde{\eta}$
## A Overview Of Proposed Method
Algorithm 1 illustrates an overview of our distillation algorithm.
## B Implementation Details
In our experiments, we trained the distilled datasets using Adam optimizer (Kingma and Ba, 2015) with linear warmup and linear decay learning rate schedule and gradient clipping with 1.0. Following the implementation in Wang et al. (2018b), we disabled dropout layers to avoid the randomness of the model training. We used a RTX 3090 or a RTX A6000, depending on the required memory size for each experiments. To obtain the performance of the full dataset for AGNews, which is used as the upper-bound of the distilled datasets, we fine-tuned BERTBASE model with learning rate η = 1e−5for epochs ∈ {2, 3, 4}, and adopted the best performance. More information about our implementation can be found in our source code1.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
puduppully-etal-2023-multi | Multi-Document Summarization with Centroid-Based Pretraining | https://aclanthology.org/2023.acl-short.13 | In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model \textit{Centrum} is better or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community \url{https://github.com/ratishsp/centrum}. |
## Multi-Document Summarization With Centroid-Based Pretraining
Ratish Puduppully1,2∗and **Parag Jain**4and **Nancy F. Chen**1,2,3and **Mark Steedman**4 1Institute for Infocomm Research (I2R), A∗STAR, Singapore 2CNRS@CREATE, Singapore 3Centre for Frontier AI Research (CFAR), A*STAR
4School of Informatics, University of Edinburgh [email protected] [email protected] [email protected] [email protected]
## Abstract
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model Centrum is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community1.
## 1 Introduction
In Multi-Document Summarization (MDS), the input is a set of documents, and the output is a summary that describes important information in a coherent and non-redundant manner (McKeown and Radev, 1995; Radev and McKeown, 1998).
In recent years, there have been significant improvements in MDS due to the availability of MDS
datasets (Fabbri et al., 2019; Gholipour Ghalandari et al., 2020; Liu* et al., 2018) and advances in pretraining approaches (Lewis et al., 2020; Raffel et al., 2020; Zhang et al., 2020).
In particular, Xiao et al. (2022) introduced a pretraining approach called PRIMERA (Pyramid-based Masked Sentence Pretraining) adapted for MDS. To create synthetic summaries, they used the Pyramid scheme (Nenkova and Passonneau, 2004),
incorporating a process of identifying and ranking entities, followed by grouping sentences containing these entities in the input documents. The sentences with the highest overlap with other documents (measured using ROUGE) in each group were masked in the input and integrated into the output, forming a synthetic summary. Xiao et al.
(2022) show that an encoder-decoder model trained on such a corpus attains strong zero-shot, few-shot, and fully supervised results on multiple datasets.
However, these synthetic summaries may lack coherence as the sentences are derived from various positions within the input documents. Furthermore, there is potential for redundancy, as sentences encapsulating similar information could be selected for inclusion in the summary.
In this paper, we propose Centrum, a pretraining objective for MDS, which is conceptually simple and overcomes these problems. The key intuition is that among a set of documents in a news cluster, the document which shares the most content with the other documents in the cluster can serve as a proxy for the summary of the document set. Such a cluster centroid is inherently coherent as it is a humanwritten document. Furthermore, because it isn't artificially assembled, it avoids content repetition.
In this paper, we pretrain Centrum on NewSHead
(Gu et al., 2020) corpus and perform zero-shot, few-shot, and fully-supervised experiments on various MDS datasets. We show that Centrum performs favorably compared to PRIMERA, especially in the zero-shot and few-shot settings, where there are none or very few training examples available.
## 2 Centroid-Based Selection Of Document As Summary
Background on PRIMERA Xiao et al. (2022)
leveraged the NewSHead corpus (Gu et al., 2020), a compilation of 369,940 news clusters, for pretraining. Using the Pyramid scheme (Nenkova and Passonneau, 2004), they created synthetic summaries through a multi-step procedure. They gathered the entity mentions in the input documents and rank the entities by the count of documents in which
∗Part of the work was done when the author was at the University of Edinburgh 1https://github.com/ratishsp/centrum 128 an entity is mentioned. Next, they divide the sentences from the documents into distinct groups, such that the sentences containing an entity belong to the same group. They then extracted the sentence with the highest overlap (as quantified by ROUGE
(Lin, 2004)) with other documents from each group.
This sentence was replaced with a mask token in the input, and copied to the output document. The idea here was to leverage information from other documents to reconstruct the masked sentence. The sentences thus obtained were concatenated to form a synthetic summary.
Xiao et al. (2022) applied the method to the NewSHead corpus (Gu et al., 2020) containing news articles clustered by topic. To accommodate long document lengths, they use Longformer EncoderDecoder (LED) architecture (Beltagy et al., 2020).
LED supports sparse global attention along with dense local attention on the input. PRIMERA
employs global attention on specialized tokens
(<doc-sep>), which act as separators between the documents within the input cluster. The pretrained PRIMERA model was then used for zeroshot evaluation, few-shot or full finetuning across multiple MDS datasets.
Problems with PRIMERA pretraining PRIMERA's reference summaries consist of sentences extracted from varying positions within different documents in a cluster. This method can yield incoherent summaries, as it can be unclear which entities the sentences refer to. We illustrate this with an example of a synthetic summary created using PRIMERA in Table 1. The first sentence about Lady Gaga originates from the first document, while the second sentence mentioning Donald Trump and Elton John comes from the second document. The lack of entity mentions within these sentences disrupts the overall coherence of the summary. We also note occurrences of redundant information in the synthetic summary. Our hypothesis is that pretraining using such noisy synthetic summaries could negatively impact model performance, particularly in zero-shot or few-shot experiments.
Our Model We propose an alternate pretraining objective for MDS called as Centrum. We hypothesize that a document exhibiting the highest similarity with the rest of the documents in a cluster could serve as a proxy for its summary. This method inherently filters out documents that bear She's a fantastic person, solid as a rock and I'm very proud of her success because I really believe I had at least something to do with it." It was unclear exactly what type of records he was referring to - the attendance of 6,500 fell far short of many Elton John concerts. *. . .* (4 sent)
Donald Trump made his way to Great Falls, Montana, on Thursday (July 5), primarily to slam Democratic Sen. Jon Tester and accuse him of failing to live up to his promises in Washington. "I've broken more Elton John [attendance]
records, and I don't have a musical instrument," he boasted.
This is my only musical instrument-the mouth-and hopefully the brain is attached to the mouth. During a rally in Great Falls, Montana, where President Trump derided the
\#MeToo movement and attacked individual Democratic lawmakers, the president once again bragged about the size of his supporter turnout. "I've broken more Elton John [attendance] records, and I don't have a musical instrument," Trump said according to Yahoo News. "I don't have a guitar, or an organ. *. . .* This is my only musical instrument - the mouth - and hopefully the brain is attached to the mouth. The brain is so much more important." *. . .*
Table 1: Example of a synthetic reference summary in PRIMERA (Xiao et al., 2022). We see that the reference summaries in PRIMERA can contain instances of incoherence and repetition. In this summary, the first sentence is about Lady Gaga, and the second is about Donald Trump and Elton John. The subjects of the first two sentences (highlighted in orange) are unclear due to the lack of named entities. Additionally, the sentences in brown and red contain repetitive information.
only a distant relation to other documents in the cluster. Furthermore, it addresses potential noise present in automatically created multi-document cluster datasets (Gu et al., 2020), for example, a document falsely associated with a cluster. The Centrum pretraining objective excludes such noise, as a mismatched document would not be chosen as the cluster centroid. Among the documents, a document may have more relevant content than others. The Centrum objective will select the more relevant document as the summary.
Drawing inspiration from Gu et al. (2020), we designate a document as the summary if it maximizes the semantic match with other documents in the cluster. Specifically, from each document set D in an instance, the best candidate summary yˆ is selected as:
$${\hat{y}}={\underset{x\in{\mathcal{D}}}{\operatorname{arg\,max}}}\,1/|{\mathcal{D}}|\sum_{x^{\prime}\in D\setminus\{x\}}f(x,x^{\prime})\quad{\mathrm{~(1)}}$$
where f(*x, x*′) represents the semantic match of summary x with document x′. A model can be trained to learn this function f. In our approach, we employ the average of ROUGE1, ROUGE2, and ROUGEL as this function. Our pretraining corpus is constructed by treating D \ {yˆ} as the 129 input and yˆ as the output.
Vogler et al. (2022) recently employed a comparable strategy for unsupervised MDS. However, our approach differs from theirs by applying this strategy for MDS task-specific pretraining. Moreover, following Xiao et al. (2022), we employ the LED architecture for handling long document context in the input.
## 3 Experimental Setup
Model We utilize the Transformers (Wolf et al.,
2020) library to conduct our experiments. Similar to Xiao et al. (2022), we adopt the large configuration of LED, comprising 459M parameters. We finetune the LED model on the NewSHead corpus (Gu et al., 2020) with our Centrum pretraining objective. Documents within a cluster are concatenated into a single text sequence, with <doc-sep>
tokens employed as separators. We apply global attention to the <doc-sep> tokens, while local attention is used for the remaining tokens. Further details about the hyperparameter settings can be found in Appendix C.
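As a rough illustration of this input format, the sketch below packs a cluster into a single LED sequence and places global attention on the <doc-sep> tokens; the checkpoint name, the maximum input length, and the way the separator token is registered are assumptions, not taken from the authors' code.

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Assumed public LED-large checkpoint; the paper only states "the large configuration of LED".
tokenizer = AutoTokenizer.from_pretrained("allenai/led-large-16384")
tokenizer.add_special_tokens({"additional_special_tokens": ["<doc-sep>"]})
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384")
model.resize_token_embeddings(len(tokenizer))

def encode_cluster(documents, max_len=4096):
    """Concatenate a document cluster with <doc-sep> separators for LED."""
    text = " <doc-sep> ".join(documents)
    enc = tokenizer(text, truncation=True, max_length=max_len, return_tensors="pt")
    # Global attention on the <doc-sep> tokens; local attention everywhere else.
    doc_sep_id = tokenizer.convert_tokens_to_ids("<doc-sep>")
    enc["global_attention_mask"] = (enc["input_ids"] == doc_sep_id).long()
    return enc
```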
Datasets We conduct our evaluation on the Multi-News (Fabbri et al., 2019), WCEP (Gholipour Ghalandari et al., 2020), and DUC 2007 datasets, comparing zero-shot, few-shot, and fully-supervised results. DUC 2007 comprises 45 examples, 20 of which we designate as the test set (Xiao et al.,
2022).
Preprocessing of NewSHead Dataset We apply the following criteria when preprocessing the dataset:
- **Minimum Document Count in a Cluster**:
We require that a news cluster must contain a minimum of three documents, allowing a document to serve as a summary for the remaining documents in the cluster. Clusters not meeting this requirement are excluded.
- **Minimum Summary Size**: We hypothesize that a significant variance in summary lengths during pretraining could hurt performance.
Therefore, we ensure that candidate summaries during pretraining are not too short, setting a minimum requirement of 250 tokens. Clusters not meeting this requirement are also excluded. In contrast, Xiao et al. (2022) can control the length of their synthetic reference summaries, ensuring that the sentence count in the synthetic summary constitutes at least 30% of the total sentences in the cluster.
Additional preprocessing steps are outlined in Appendix D. After applying these criteria, we retain 172K clusters, approximately 45% of the total clusters in the NewSHead corpus (Gu et al., 2020).
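A minimal sketch of the two cluster-level filters above; the thresholds come from the text, while the whitespace-based token count and the function signature are illustrative assumptions rather than the authors' preprocessing code.

```python
def keep_cluster(documents: list[str], candidate_summary: str,
                 min_docs: int = 3, min_summary_tokens: int = 250) -> bool:
    """Apply the two cluster-level filters described above."""
    if len(documents) < min_docs:
        # A document can only serve as a summary for at least two other documents.
        return False
    # Token count approximated by whitespace splitting (an assumption).
    return len(candidate_summary.split()) >= min_summary_tokens
```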
Comparison models In addition to the reported scores of PRIMERA (denoted as PRIMERA* in Table 2), we independently reproduce the PRIMERA
model scores by running inference using the PRIMERA checkpoints available in the Transformers (Wolf et al., 2020) library. Similar to the findings of Giorgi et al. (2022), we note that our reproduced scores are lower than those reported by Xiao et al. (2022), an exception being the zero-shot results for the WCEP dataset. For a broader comparison, we also consider the Pegasus model proposed by Zhang et al. (2020). Pegasus is a pretrained model focusing on single-document summarization (SDS), which obtains strong results on multiple SDS datasets such as XSum (Narayan et al., 2018) and CNN-DailyMail (Hermann et al.,
2015).
## 4 Results
We conduct experiments in three settings: zero-shot, few-shot, and fully supervised.
Zero-shot In the zero-shot setting, we evaluate our pretrained Centrum model on the test datasets of Multi-News, WCEP, and DUC 2007. Following Xiao et al. (2022), the output length of the summary is set as the average length of the gold summaries of the test datasets. As Table 2 illustrates, Centrum outperforms the PRIMERA model in terms of ROUGE scores across all three datasets.
Few-shot In the few-shot setting, we follow the approach of Xiao et al. (2022) by conducting two sets of experiments. We randomly select 10 and 100 examples from the training set for model finetuning, and an equivalent number of examples from the validation set. To account for potential variance in scores due to example selection, we repeat this process five times with different seeds.
We observe that the summaries generated by Centrum are, on average, longer than those produced by PRIMERA. This is primarily a result of the Centrum pretraining objective, which imposes a minimum summary length of 250 tokens. In contrast, PRIMERA synthetic summaries are restricted to a maximum length equating to 30% of the input set. To ensure a fair comparison, we truncate the summaries in the few-shot setting to match the lengths assigned in the zero-shot setting.
| | Zero Shot | | | 10 Examples | | | 100 Examples | | |
|------------------|------|------|------|------|------|------|------|------|------|
| System | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL |
| Multi-News (256) | | | | | | | | | |
| Pegasus* | 32.0 | 10.1 | 16.7 | 39.0 | 12.1 | 20.3 | 43.0 | 13.5 | 21.1 |
| PRIMERA* | 42.0 | 13.6 | 20.8 | 44.0 | 15.5 | 22.0 | 46.0 | 16.8 | 22.9 |
| PRIMERA | 41.6 | 13.1 | 19.9 | 43.4 | 15.3 | 21.6 | 45.2 | 16.3 | 22.7 |
| Centrum | 43.5 | 15.7 | 22.4 | 43.4 | 16.6 | 22.2 | 45.7 | 16.8 | 23.2 |
| WCEP (50) | | | | | | | | | |
| Pegasus* | 33.2 | 12.7 | 23.8 | 35.6 | 14.8 | 26.8 | 42.1 | 19.9 | 33.0 |
| PRIMERA* | 28.0 | 10.3 | 20.9 | 39.0 | 17.6 | 30.6 | 43.0 | 20.5 | 33.9 |
| PRIMERA | 32.9 | 12.1 | 23.4 | 37.0 | 15.8 | 28.2 | 42.4 | 20.5 | 33.4 |
| Centrum | 35.7 | 14.2 | 25.8 | 38.2 | 17.0 | 29.5 | 42.0 | 20.1 | 33.0 |
| DUC2007 (250) | | | | | | | | | |
| Pegasus | 22.7 | 4.2 | 12.8 | 23.1 | 3.5 | 15.2 | - | - | - |
| PRIMERA | 31.9 | 5.4 | 14.2 | 34.6 | 6.6 | 15.2 | - | - | - |
| Centrum | 32.7 | 5.7 | 15.0 | 35.3 | 7.7 | 16.8 | - | - | - |

Table 2: ROUGE results in the zero-shot, 10-example, and 100-example few-shot settings on the Multi-News, WCEP, and DUC 2007 datasets.
| System | R1 | R2 | RL |
|----------|------|------|------|
| PRIMERA* | 49.9 | 21.1 | 25.9 |
| PRIMERA | 50.0 | 20.6 | 25.5 |
| Centrum | 49.0 | 20.4 | 25.4 |

Table 3: Fully-supervised results on the Multi-News dataset.
Table 2 presents the average scores obtained over the five seeds. Given that the DUC 2007 dataset contains only 45 examples, results are reported for training and validation with 10 examples.
From the results, we see that Centrum outperforms PRIMERA across all datasets when finetuned with 10 examples. Furthermore, Centrum maintains performance parity with PRIMERA when finetuned using 100 examples.
Fully supervised In this setting, the pretrained models are finetuned on the training split of the Multi-News dataset. As reported in Table 3, the results from the fully-supervised experiments demonstrate that Centrum performs on par with PRIMERA on the Multi-News dataset.
Human Evaluation To complement the automatic evaluation results, we conduct a human evaluation study. Three professional linguists are tasked with comparing the outputs of Centrum,
PRIMERA, and Pegasus using the DUC 2007 dataset, and are compensated at rates higher than local minimum wages. The evaluation focuses on three metrics as outlined by Angelidis et al. (2021):
informativeness (which assesses the consistency between model output and the human reference summary), coherence (which evaluates the ordering of information in the summary), and non-repetition
(where a higher-quality summary exhibits fewer repetitions of words, phrases, or sentences).
The evaluators are presented with three summaries from the three models, randomly ordered, along with the reference summary. They are then instructed to rank the summaries from best (+1)
to worst (-1) for each of the three metrics. These rankings are summed and scaled by the number of examples (20), producing scores that range from 100% (best) to –100% (worst). The results of this human evaluation are presented in Table 4.
Our findings indicate that Centrum significantly outperforms Pegasus across all three metrics, as confirmed by a one-way ANOVA with a post-hoc Tukey test (p ≤ 0.05). In comparison to PRIMERA,
Centrum is significantly better in terms of informativeness and performs comparably in terms of coherence and non-repetition. Pegasus, on the other hand, is marked by heavy repetition within its summaries, which likely accounts for its lower scores.
![4_image_0.png](4_image_0.png)
Table 4: Human evaluation results for the DUC2007 dataset, with higher scores being preferable. We compare the Pegasus, PRIMERA, and Centrum models across three metrics: informativeness (Inform), coherence (Coh), and avoidance of repetition (Rep). Results that are statistically significantly different from Centrum are marked with an asterisk (*).
## 5 Conclusion
We propose a centroid-based pretraining objective for multi-document summarization. Through experiments, we see that our model Centrum outperforms the existing state-of-the-art model PRIMERA in zero-shot settings and is comparable with PRIMERA in few-shot and supervised settings.
## 6 Limitations
As mentioned in the main paper, one of the limitations of our Centrum model is that it tends to produce longer outputs in comparison to PRIMERA.
This necessitates controlling the length of the summary by truncating to a desired length. Moreover, due to our requirement of at least three documents in a cluster for centroid computation, we are unable to utilize clusters of only two documents present in Gu et al. (2020). This constraint significantly reduces the utilizable corpus size, leading us to work with roughly 45% of the corpus size used by PRIMERA. Future research could explore the possibility of initializing Centrum with the gap sentence generation-based Pegasus (Zhang et al., 2020)
single document summarization objective, potentially allowing for full utilization of the corpus size of Gu et al. (2020).
## Acknowledgements
This research was supported by funding from the Institute for Infocomm Research (I2R) under A*STAR ARES, Singapore, and by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE)
programme. The work was supported in part by ERC Advanced Fellowship GA 742137 SEMANTAX and the University of Edinburgh Huawei Laboratory. Parag is supported by Huawei and the UKRI Centre for Doctoral Training in Natural Language Processing (grant EP/S022481/1). We thank the anonymous reviewers for their constructive feedback.
## References
Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021.
Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277–293.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer.
Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics.
Demian Gholipour Ghalandari, Chris Hokamp, Nghia The Pham, John Glover, and Georgiana Ifrim. 2020. A large-scale multi-document summarization dataset from the Wikipedia current events portal. In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics, pages 1302–1308, Online. Association for Computational Linguistics.
John Giorgi, Luca Soldaini, Bo Wang, Gary Bader, Kyle Lo, Lucy Lu Wang, and Arman Cohan. 2022. Exploring the challenges of open domain multi-document summarization.
Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, Hongkun Yu, You Wu, Cong Yu, Daniel Finnie, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating Representative Headlines for News Stories. In *Proc. of* the the Web Conf. 2020.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693–
1701.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021.
Datasets: A community library for natural language processing. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Peter J. Liu*, Mohammad Saleh*, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations.
Kathleen McKeown and Dragomir R. Radev. 1995.
Generating summaries of multiple news articles. In Proceedings of the 18th Annual International ACM
SIGIR Conference on Research and Development in Information Retrieval, SIGIR '95, pages 74–82, New York, NY, USA. Association for Computing Machinery.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics:
HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics.
Dragomir R. Radev and Kathleen R. McKeown. 1998.
Generating natural language summaries from multiple on-line sources. *Computational Linguistics*,
24(3):469–500.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67.
Nikolai Vogler, Songlin Li, Yujie Xu, Yujian Mi, and Taylor Berg-Kirkpatrick. 2022. An unsupervised masking objective for abstractive multi-document news summarization. *CoRR*, abs/2201.02321.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245–5263, Dublin, Ireland. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization.
In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 11328–11339. PMLR.
## A Potential Risks
Despite our model's potential, there is a risk that the generated summaries might not accurately represent the input document due to noise present in the training and finetuning examples. At the same time, we believe that our Centrum pretraining strategy doesn't affect the factuality of the model either positively or negatively compared to Xiao et al. (2022).
Future research will aim to explicitly evaluate and improve the factuality of our model's output.
## B Details Of The Datasets
Table 5 provides detailed information about the datasets used in our study. The NewSHead, MultiNews, and DUC 2007 datasets all originate from the news domain, while the WCEP dataset is derived from the Wikipedia Current Events Portal.
## C Hyperparameter Details
Our hyperparameters are similar to those of Xiao et al. (2022). We train for 100K steps with a learning rate of 3e-5. We evaluate every 500 steps and early-stop on the validation perplexity with a patience of 50.
| Name | #Ex | #Doc/C | #Ldoc | #Lsumm |
|-------------------|------|--------|-------|--------|
| NewSHead (2020) | 177K | 4.2 | 1692 | 484 |
| Multi-News (2019) | 56K | 2.8 | 1793 | 217 |
| WCEP (2020) | 10K | 9.1 | 3866 | 28 |
| DUC 2007 | 45 | 25 | 540 | 250 |
Table 5: Characteristics of the datasets utilized in this paper. The notations are as follows: \#Ex represents the number of examples, \#Doc/C is the average number of documents per cluster, \#Ldoc signifies the average token count in the input, and \#L*summ* indicates the average token count in the summary. Values associated with the Multi-News and WCEP datasets are sourced from Xiao et al. (2022).
Pretraining Centrum on a 4-node A100 GPU setup took around 4 days. We computed the results using the ROUGE (Lin, 2004) library with the default settings and the '--use_stemmer' argument.
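For illustration, the configuration above could be expressed with the Transformers Trainer API roughly as follows; argument names can differ across library versions, `eval_loss` stands in for the validation perplexity, and the output directory is a hypothetical name rather than part of the authors' actual training script.

```python
from transformers import Seq2SeqTrainingArguments, EarlyStoppingCallback

training_args = Seq2SeqTrainingArguments(
    output_dir="centrum-led-large",      # hypothetical output path
    learning_rate=3e-5,
    max_steps=100_000,
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",   # proxy for validation perplexity
    greater_is_better=False,
)
# Early stopping with a patience of 50 evaluations can then be attached to a
# Seq2SeqTrainer via callbacks=[EarlyStoppingCallback(early_stopping_patience=50)].
```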
## D Additional Preprocessing Steps
- **Removing boilerplate text from summaries**:
We remove boilerplate text such as "Sorry, this video isn't available any more.", "Advertisement Story continues below" from the summary sentences using regular expression based cleaning.
- **Truncation of documents**: We truncate each document in the cluster to the maximum length of source context allowed in LED divided by the count of the documents in the cluster. Thus, each document has a proportional representation in the cluster, similar to Xiao et al. (2022).
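The two steps above could look roughly as follows; the regular expressions cover only the phrases quoted in the text, and the default source length is an assumption rather than a value reported by the authors.

```python
import re

# Boilerplate phrases quoted above; the authors' full pattern set is not given.
BOILERPLATE_PATTERNS = [
    re.compile(r"Sorry, this video isn't available any more\.?"),
    re.compile(r"Advertisement Story continues below"),
]

def clean_summary(text: str) -> str:
    """Strip boilerplate phrases from a candidate summary."""
    for pattern in BOILERPLATE_PATTERNS:
        text = pattern.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

def truncate_cluster(documents: list[str], tokenizer, max_source_len: int = 4096) -> list[str]:
    """Give each document an equal share of the LED source window."""
    per_doc = max_source_len // len(documents)
    truncated = []
    for doc in documents:
        ids = tokenizer(doc, truncation=True, max_length=per_doc,
                        add_special_tokens=False)["input_ids"]
        truncated.append(tokenizer.decode(ids))
    return truncated
```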
## E Software And Licenses
Our model relies on datasets downloaded from HuggingFace datasets (Lhoest et al., 2021)
(Apache 2.0). We release our models under the Apache 2.0 license.
## F Human Evaluation
Figures 1 and 2 show the screenshots of the user interface presented to the raters.
Figure 1: Instructions for human evaluation

Figure 2: Instructions for human evaluation (continued)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Appendix A
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 Models And Datasets
✓ B1. Did you cite the creators of artifacts you used?
Section 3 Models and Datasets, and Section B of Appendix
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section E
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section B of Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section B
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3, Section C of Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, Section C of Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section C of Appendix
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4 (Human Evaluation)
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 4 (Human evaluation) and Section F in Appendix
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 4 (Human evaluation)
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section F in Appendix

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
de-varda-marelli-2023-scaling | Scaling in Cognitive Modelling: a Multilingual Approach to Human Reading Times | https://aclanthology.org/2023.acl-short.14 | Neural language models are increasingly valued in computational psycholinguistics, due to their ability to provide conditional probability distributions over the lexicon that are predictive of human processing times. Given the vast array of available models, it is of both theoretical and methodological importance to assess what features of a model influence its psychometric quality. In this work we focus on parameter size, showing that larger Transformer-based language models generate probabilistic estimates that are less predictive of early eye-tracking measurements reflecting lexical access and early semantic integration. However, relatively bigger models show an advantage in capturing late eye-tracking measurements that reflect the full semantic and syntactic integration of a word into the current language context. Our results are supported by eye movement data in ten languages and consider four models, spanning from 564M to 4.5B parameters. | # Scaling In Cognitive Modelling: A Multilingual Approach To Human Reading Times
Andrea Gregor de Varda University of Milano - Bicocca [email protected]
## Abstract
Neural language models are increasingly valued in computational psycholinguistics, due to their ability to provide conditional probability distributions over the lexicon that are predictive of human processing times. Given the vast array of available models, it is of both theoretical and methodological importance to assess what features of a model influence its psychometric quality. In this work we focus on parameter size, showing that larger Transformerbased language models generate probabilistic estimates that are less predictive of early eyetracking measurements reflecting lexical access and early semantic integration. However, relatively bigger models show an advantage in capturing late eye-tracking measurements that reflect the full semantic and syntactic integration of a word into the current language context.
Our results are supported by eye movement data in ten languages and consider four models, spanning from 564M to 4.5B parameters.
## 1 Introduction
The role of context-dependent statistical information in human language processing has received considerable attention in cognitive modelling. A
solid empirical finding that has emerged from this research line is that speakers actively anticipate the upcoming linguistic material (Huettig, 2015; Staub, 2015). Indeed, behavioral and neural patterns that are diagnostic of reduced cognitive cost have been reported in response to predictable words; these emerged from the analysis of eye movements
(Staub, 2015; Ehrlich and Rayner, 1981), changes in pupil size (Frank and Thompson, 2012), selfpaced reading times, (Frank and Hoeks, 2019; Fernandez Monsalve et al., 2012), ERP responses (DeLong et al., 2005; Van Berkum et al., 2005; Kwon et al., 2017), frontotemporal blood oxygenation levels (Baumgaertner et al., 2002; Dien et al., 2008),
and MEG data (Takahashi et al., 2021).
Marco Marelli
University of Milano - Bicocca
[email protected]

Inferential theories of language comprehension argue that prediction must be an intrinsic feature of an incremental probabilistic cognitive processor (Levy, 2008; Shain et al., 2022). These accounts contend that the Kullback-Leibler (KL) divergence (i.e., relative entropy) between the probabilistic state of the processor before and after observing a given word is the cause of the processing difficulty associated with that word. It has been demonstrated that the KL divergence associated with this probability shift is mathematically equivalent to the *surprisal* of that word, i.e., the negative logarithm of its probability conditioned by the preceding sentence context ($\mathrm{surprisal}(w_i) = -\log P(w_i \mid w_1, w_2, \ldots, w_{i-1})$; Levy, 2008). Inferential theories, which predict a logarithmic linking function between contextual predictability and cognitive cost, are supported by extensive experimental evidence in the computational psycholinguistics literature (Smith and Levy, 2008, 2013; Wilcox et al.,
− log P(wi|w1, w2 *. . . w*i−1); Levy, 2008). Inferential theories, which predict a logarithmic linking function between contextual predictability and cognitive cost, are supported by extensive experimental evidence in the computational psycholinguistics literature (Smith and Levy, 2008, 2013; Wilcox et al.,
2020; Shain et al., 2022, but see Hoover et al., 2022; Brothers and Kuperberg, 2020).
Statistical language models developed in NLP
research have been of paramount importance in the evolution of inferential theories of language comprehension. Indeed, language models are usually trained to predict the upcoming word in a corpus of naturalistic text, and thus define a conditional probability distribution that can be employed to compute word surprisal. Modern computationallyderived estimates of word predictability have been shown to perform on par (Shain et al., 2022) or even better (Hofmann et al., 2022; Michaelov et al.,
2022) than predictability estimates obtained with expensive human annotation (although they fail to account for the processing demands of some specific linguistic patterns, see Arehalli et al., 2022; Van Schijndel and Linzen, 2021; Hahn et al., 2022).
However, given that language models display a great amount of variation in their architectures and performances, various studies have investigated which models are better suited to characterize the behavioral correlates of human sentence comprehension. Seminal work has shown that the "linguistic accuracy" of a model (i.e., its ability to accurately predict the next word) is positively related to its "psychological accuracy" (namely, the capability of a surprisal estimate to explain variance in human responses, as captured by the increase in fit in a corresponding statistical model; Goodkind and Bicknell, 2018; Wilcox et al., 2020; Merkx and Frank, 2021, but see Hao et al., 2020; Kuribayashi et al., 2021).
A recent incidental finding by Shain et al. (2022)
cast doubt on this conclusion. The authors reported that the GPT-2small model substantially outperformed GPT-3 in predicting self-paced reading times and fixation patterns, despite having a parameter size smaller by three orders of magnitude and displaying higher perplexity values in next-word prediction. The result, which suggests that the correlations between the linguistic and psychological accuracy of language models might not hold for very deep transformer-based architectures, has been promptly replicated with different GPT-2 variants (Oh et al., 2022; Oh and Schuler, 2022). This observation is at odds with the empirical scaling laws for neural language models (Kaplan et al.,
2020), which show that the quality of a language model (both in terms of test loss and downstream performance, Hernandez et al., 2021) increases monotonically as the number of parameters increases (although see Lin et al., 2022).
## 2 Related Work And Motivation
Research in computational psycholinguistics has largely followed the progressive switch to the Transformer architecture that has characterized the NLP literature in the last years, with Transformerbased surprisal estimates being evaluated as predictors of processing difficulty (Wilcox et al., 2020; Hao et al., 2020; Merkx and Frank, 2021). While early studies within this research line have documented a positive relationship between the linguistic and the psychological accuracy of a model
(Goodkind and Bicknell, 2018; Wilcox et al., 2020; Merkx and Frank, 2021), recent findings with decoder-only large language models have documented an opposite pattern, with larger and betterperforming pre-trained Transformers providing worse psychometric estimates than their smaller counterparts (Oh et al., 2022; Oh and Schuler,
2022).
The possibility that cognitive modelling might constitute an exception to scaling laws is intriguing, but further examination is needed to warrant such claims. All the evidence in support of this view has come from the English language alone
(except from Kuribayashi et al., 2021), leaving an open question as to the cross-lingual generalizability of these findings. The English-centric approach to this problem is not surprising, since inferential approaches to language processing have been primarily supported by experimental evidence in English (Aurnhammer and Frank, 2019; Frank and Bod, 2011; Frank et al., 2015; Fernandez Monsalve et al., 2012; Wilcox et al., 2020; Goodkind and Bicknell, 2018; Smith and Levy, 2013), Dutch
(Frank and Hoeks, 2019; Brouwer et al., 2010) and German (Boston et al., 2008; Brouwer et al., 2021),
while empirical support from non-Germanic languages is far more limited (although see Fan and Reilly, 2020; Kuribayashi et al., 2021). To the best of our knowledge, there is only one study that provided large-scale cross-lingual evidence in support of surprisal theory (de Varda and Marelli, 2022).
Indeed, both NLP (Joshi et al., 2020) and cognitive science research (Blasi et al., 2022) have long overrelied on the English language to develop language processing systems and test theories of language and cognition. This tendency can lead to hasty claims of generality, and must be mitigated with cross-linguistic research efforts challenging the universality of English-specific findings.
Another potential shortcoming of the studies that reported the inverse scaling trend is that they only considered a single eye-tracking measurement as an index of processing cost (Oh et al., 2022; Oh and Schuler, 2022). This choice reflects a common tendency within the inferential language processing framework (Aurnhammer and Frank, 2019; Goodkind and Bicknell, 2018; Smith and Levy, 2013; Wilcox et al., 2020); however, natural reading is an ability composed of multiple sub-processes characterized by different levels of complexity (see for instance Plaut et al., 1996; Coltheart et al., 2001).
In principle, it is reasonable to assume that different processing stages, characterized by different degrees of complexity, might be better captured by models with varying parameter sizes, with shallow processes better modelled by (relatively) simpler networks, and complex integrative operations better characterized by more complex architectures.
## 3 Aims
The current work aims at inspecting the relationship between the linguistic and the psychological accuracy of a neural language model across languages, testing whether previous observations on inverse scaling in cognitive modelling hold across a sample of ten languages belonging to four different families. Furthermore, our study considers different eye-tracking measures that are thought to reflect different processing stages, to examine the possibility that the relationship between the psychological and linguistic accuracy of a model might vary as a function of the computational complexity of the cognitive operations being studied.
## 4 Methods And Materials

## 4.1 Data
In this study, we considered the eye movement data from the MECO-L1 corpus (Siegelman et al.,
2022), a large-scale repository of eye-tracking records covering 13 languages. Participants engaged in a naturalistic reading task, and were presented with 12 texts consisting of encyclopedic entries on a handful of topics; five of the twelve original texts were translated from English to the target languages, while the other seven were nontranslated texts on the same topics and with the same writing styles, comparable length, and similar difficulty. Data points that showed either very short first fixation durations (< 80 ms) or very long total fixation times (top 1% of the participant-specific distribution) were discarded. We analyzed three measures of eye movement behavior for each word wi, which are thought to reflect early, intermediate, and late stages of processing:
1. *First fixation (FF):* the time elapsed during the first fixation on wi. This measure is often assumed to reflect low-level oculomotor processes, early lexical access, and predictive processing (Demberg and Keller, 2008; Staub, 2015).
2. *Gaze duration (GD):* the sum of the fixations landing on wi before the gaze leaves the word for the first time. This measure is thought to be indicative of lexical access, and possibly of early syntactic and semantic integration (Inhoff and Radach, 1998; Rayner, 1998).
3. *Total reading time (TT):* the total amount of time spent looking at wi, including fixations returning to the word after having left it. This measure is thought to reflect full semantic integration (Radach and Kennedy, 2013) and syntactic integration and reanalysis (Meseguer et al., 2002).
## 4.2 Models
In this study, we employed the XGLM family of auto-regressive language models (Lin et al., 2021).
XGLMs are Transformer-based, decoder-only language models inspired by GPT-3 (Brown et al.,
2020). We considered four pre-trained models, with 564M, 1.7B, 2.9B, and 4.5B parameters, and extracted word-by-word surprisal estimates from each of them. In the case of multi-token words, we summed the log probabilities assigned to the sub-word tokens, following the chain rule.
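As an illustration of this procedure (not the authors' released code, which is linked at the end of Section 4.3), word-level surprisal can be extracted from an XGLM checkpoint as sketched below; the grouping of sub-word tokens into words via the SentencePiece "▁" marker is a heuristic assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")
model.eval()

def word_surprisals(sentence: str):
    ids = tok(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    # The distribution predicted at position t-1 scores the token observed at t.
    targets = ids[0, 1:]
    token_surp = -logprobs[0, :-1].gather(1, targets.unsqueeze(1)).squeeze(1)
    # Sum sub-word surprisals into word surprisals (chain rule); a new word is
    # assumed to start at every token whose string form begins with "▁".
    words, surps = [], []
    for tok_str, s in zip(tok.convert_ids_to_tokens(targets.tolist()), token_surp.tolist()):
        if tok_str.startswith("▁") or not words:
            words.append(tok_str.lstrip("▁"))
            surps.append(s)
        else:
            words[-1] += tok_str
            surps[-1] += s
    return list(zip(words, surps))
```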
## 4.3 Analyses
Of the 13 languages included in the MECO dataset we had to exclude the Hebrew, Dutch, and Norwegian data, since these languages were not included in the XGLM pre-training data. Thus, our analyses were conducted in ten languages belonging to four language families (see Appendix A). On average, there were 65,450.8 available data points for each language (SD = 19,712.2). We fit 120 linear1 mixed-effects regression models (10 languages ×
4 models × 3 fixation measurements), with random intercepts for participants and items. We included as linear covariates length, log-frequency, and their interaction relative to wi, wi−1, and wi−2, to account for spillover effects. Our models also included a main effect of surprisal relative to wi, wi−1, and wi−2. All the variables were standardized before being entered into the mixed-effects regression models.
1 Our choice of fitting linear models is supported by ample evidence showing that the functional form of the effects of log-probabilities on reading times is indeed linear (see Smith and Levy, 2008, 2013; Wilcox et al., 2020; Shain et al., 2022).

![3_image_0.png](3_image_0.png)

To evaluate the increase in the goodness of fit due to the inclusion of surprisal as a fixed effect, we compared each model with a corresponding baseline model, which was identical except for the absence of the fixed effects of surprisal. As is common practice in the literature, we calculated the difference in the log likelihood between the baseline and the experimental model (∆LogLik; Goodkind and Bicknell, 2018; Wilcox et al., 2020; Kuribayashi et al., 2021; Oh and Schuler, 2022). In the literature we have reviewed in §1, a common approach was to correlate the perplexity of a language model with the ∆LogLik obtained by adding the surprisal terms; however, perplexity values can be properly compared only in the context of a fixed reference vocabulary (Wilcox et al., 2020). Technically, XGLM models produce a conditional probability distribution over the same whole vocabulary, regardless of the language of the specific text they are processing. However, the models have received strong evidence during pre-training that some subportions of the vocabulary (e.g. Cyrillic tokens)
should be essentially ignored while processing text in some languages (e.g. English), thus reducing their *actual* reference vocabulary. Hence, while we report the perplexity-based results in Appendix B,
we focused on the link between the linguistic and psychological accuracy of the models by observing how the ∆LogLik was affected by the parameter size of the model. The choice of employing parameter size as a proxy of linguistic accuracy is supported by the results in the original XGLM paper, where the authors reported better results in almost all downstream tasks with the bigger versions of the XGLM model family (Lin et al., 2021).
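A heavily simplified sketch of this model comparison in Python is given below; the covariate set is abbreviated, the column names are hypothetical, the item intercepts are approximated with a variance component, and the authors' actual analysis (with the full set of predictors and spillover terms) is available in their repository.

```python
import statsmodels.formula.api as smf

def delta_loglik(df):
    """Log-likelihood gain from adding surprisal to a baseline reading-time model.

    `df` is assumed to hold one row per word token, with (hypothetical) columns
    TT, length, logfreq, surprisal, participant and item, all predictors z-scored.
    """
    baseline = "TT ~ length * logfreq"
    full = baseline + " + surprisal"
    common = dict(data=df, groups=df["participant"],
                  vc_formula={"item": "0 + C(item)"})  # item intercepts as a variance component
    m0 = smf.mixedlm(baseline, **common).fit(reml=False)
    m1 = smf.mixedlm(full, **common).fit(reml=False)
    return m1.llf - m0.llf
```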
The code employed in this study is publicly available at https://github.com/Andrea-de-Varda/surprisal-across-languages.
## 5 Results
The first main finding of our study is that surprisal is a solid predictor of reading times across the languages considered, confirming the previous observation that context-dependent probabilistic processing generalizes beyond the Germanic language sample typically considered in the literature (de Varda and Marelli, 2022). The XGLM-based surprisal estimates were statistically significant in all cases when considering GD and TT, and in the vast majority of the cases when considering FF (see Appendix A).
The increase in goodness of fit that could be attributed to surprisal is displayed in Figure 1, grouped by model type and fixation measure. Concerning FF (1a), we reported a general decrease in
∆LogLik when increasing the number of parameters, with the smallest XGLM564M variant outperforming the bigger models in terms of psychological accuracy. A similar trend can be observed in GD (1b), although the difference in psychological accuracy between XGLM564M and XGLM1.7B
appears to be rather small3. The results are different when considering TT as the dependent variable
(1c), as in this case the model that provided the highest average increase in goodness of fit was XGLM1.7B.
## 6 Discussion
In this experiment, we showed that large multilingual Transformer-based models were outperformed by their smaller variants in predicting early eye movement measurements of processing difficulty.
These measurements are thought to reflect predictive processes, lexical access, and early semantic integration. This result corroborates the previous claims that cognitive modelling might constitute an exception to empirical scaling laws in NLP (Oh and Schuler, 2022). However, predictability estimates computed by *relatively* larger variants of the same architecture - but not the largest - provided surprisal estimates that better captured late eye-tracking measurements, which are thought to reflect the full semantic and syntactic integration of a word into the phrasal context. This dissociation is in line with the observation that it is not appropriate to adopt a "one-size-fits-all" approach when studying how linguistic distributional knowledge explains different cognitive processes (Wingfield and Connell, 2022). Instead, context-dependent probabilistic information derived from different neural architectures might be more apt to model certain cognitive mechanisms, depending on the computational complexity of the processes being considered.
## Limitations
This work complemented previous analyses on the link between the linguistic and psychological accuracy of a neural language model by expanding the language sample to ten typologically distinct languages. However, our sample of neural language models was limited with respect to the literature focusing exclusively on English (Oh et al., 2022; Oh and Schuler, 2022; Shain et al., 2022). This problem cannot be overcome at the present state of affairs, since there are very few available massively multilingual auto-regressive language models, and the only one with sufficient coverage of our language sample was XGLM. This problem is an expression of a general difficulty in NLP to conduct experimental research on low-resource languages, due to the extreme skewness in the distribution of available resources (Joshi et al., 2020). However, we are confident that future developments in natural language engineering will support an additional test of our hypotheses with a more representative sample of models.
## References
Suhas Arehalli, Brian Dillon, and Tal Linzen.
2022. Syntactic surprisal from neural models predicts, but underestimates, human processing difficulty from syntactic ambiguities. arXiv preprint arXiv:2210.12187.
Christoph Aurnhammer and Stefan L Frank. 2019.
Comparing gated and simple recurrent neural network architectures as models of human sentence processing.
Annette Baumgaertner, Cornelius Weiller, and Christian Büchel. 2002. Event-related fmri reveals cortical sites involved in contextual sentence integration.
Neuroimage, 16(3):736–745.
Damián E Blasi, Joseph Henrich, Evangelia Adamou, David Kemmerer, and Asifa Majid. 2022. Overreliance on english hinders cognitive science. *Trends* in cognitive sciences.
Marisa Ferrara Boston, John Hale, Reinhold Kliegl, Umesh Patil, and Shravan Vasishth. 2008. Parsing costs as predictors of reading difficulty: An evaluation using the potsdam sentence corpus. Journal of Eye Movement Research, 2(1).
Trevor Brothers and Gina Kuperberg. 2020. Word predictability effects are linear, not logarithmic: Implications for probabilistic models of sentence comprehension. *Journal of Memory and Language*, 116.
Harm Brouwer, Francesca Delogu, Noortje J Venhuizen, and Matthew W Crocker. 2021. Neurobehavioral correlates of surprisal in language comprehension: A neurocomputational model. *Frontiers in Psychology*,
12:615538.
Harm Brouwer, Hartmut Fitz, and John Hoeks. 2010.
Modeling the noun phrase versus sentence coordination ambiguity in dutch: Evidence from surprisal theory. In *Proceedings of the 2010 workshop on cognitive modeling and computational linguistics*, pages 72–80.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Max Coltheart, Kathleen Rastle, Conrad Perry, Robyn Langdon, and Johannes Ziegler. 2001. Drc: a dual route cascaded model of visual word recognition and reading aloud. *Psychological review*, 108(1):204.
Andrea Gregor de Varda and Marco Marelli. 2022. The effects of surprisal across languages: Results from native and non-native reading. In *Findings of the* Association for Computational Linguistics: AACLIJCNLP 2022, pages 138–144.
Katherine A DeLong, Thomas P Urbach, and Marta Kutas. 2005. Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. *Nature neuroscience*, 8(8):1117–1121.
Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. *Cognition*, 109(2):193–210.
Joseph Dien, Michael S Franklin, Charles A Michelson, Lisa C Lemen, Christy L Adams, and Kent A
Kiehl. 2008. fmri characterization of the language formulation area. *Brain Research*, 1229:179–192.
Susan F Ehrlich and Keith Rayner. 1981. Contextual effects on word perception and eye movements during reading. *Journal of verbal learning and verbal* behavior, 20(6):641–655.
Xi Fan and Ronan Reilly. 2020. Reading development at the text level: an investigation of surprisal and embedding-based text similarity effects on eye-movements in Chinese early readers. *Journal of Eye Movement Research*, 13(6).
Irene Fernandez Monsalve, Stefan Frank, and Gabriella Vigliocco. 2012. Lexical surprisal as a general predictor of reading time. In *Proceedings of the 13th* Conference of the European Chapter of the Association for Computational Linguistics, pages 398–408, Avignon, France. Association for Computational Linguistics.
Stefan Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierarchical structure. *Psychological science*, 22(6):829–834.
Stefan Frank and John CJ Hoeks. 2019. The interaction between structure and meaning in sentence comprehension: Recurrent neural networks and reading times.
Stefan Frank, Leun J Otten, Giulia Galli, and Gabriella Vigliocco. 2015. The erp response to the amount of information conveyed by words in sentences. Brain and language, 140:1–11.
Stefan Frank and Robin Thompson. 2012. Early effects of word surprisal on pupil size during reading. In Proceedings of the annual meeting of the cognitive science society, volume 34.
Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th workshop on cognitive modeling and computational linguistics (CMCL 2018), pages 10–18.
Michael Hahn, Richard Futrell, Roger Levy, and Edward Gibson. 2022. A resource-rational model of human processing of recursive linguistic structure.
Proceedings of the National Academy of Sciences, 119(43):e2122602119.
Yiding Hao, Simon Mendelsohn, Rachel Sterneck, Randi Martinez, and Robert Frank. 2020. Probabilistic predictions of people perusing: Evaluating metrics of language model performance for psycholinguistic modeling. *arXiv preprint arXiv:2009.03954*.
Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. 2021. Scaling laws for transfer.
Markus J Hofmann, Steffen Remus, Chris Biemann, Ralph Radach, and Lars Kuchinke. 2022. Language models explain word reading times better than empirical predictability. *Frontiers in Artificial Intelligence*,
4:214.
Jacob Louis Hoover, Morgan Sonderegger, Steven T
Piantadosi, and Timothy J O'Donnell. 2022. The plausibility of sampling as an algorithmic theory of sentence processing.
Falk Huettig. 2015. Four central questions about prediction in language processing. *Brain research*,
1626:118–135.
Albrecht Werner Inhoff and Ralph Radach. 1998. Definition and computation of oculomotor measures in the study of cognitive processes. Eye guidance in reading and scene perception, pages 29–53.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B.
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models.
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, and Kentaro Inui. 2021.
Lower perplexity is not always human-like. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5203–
5217.
Nayoung Kwon, Patrick Sturt, and Pan Liu. 2017. Predicting semantic features in Chinese: Evidence from ERPs. *Cognition*, 166:433–446.
Roger Levy. 2008. Expectation-based syntactic comprehension. *Cognition*, 106(3):1126–1177.
Stephanie Lin, Jacob Hilton, and Owain Evans. 2022.
Truthfulqa: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 3214–3252.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2021.
Few-shot learning with multilingual language models. arXiv preprint arXiv:2112.10668.
Danny Merkx and Stefan L Frank. 2021. Human sentence processing: Recurrence or attention? In *Proceedings of the Workshop on Cognitive Modeling and* Computational Linguistics, pages 12–22.
Enrique Meseguer, Manuel Carreiras, and Charles Clifton. 2002. Overt reanalysis strategies and eye movements during the reading of mild garden path sentences. *Memory & cognition*, 30(4):551–561.
James A Michaelov, Seana Coulson, and Benjamin K
Bergen. 2022. So cloze yet so far: N400 amplitude is better predicted by distributional information than human predictability judgements. *IEEE Transactions* on Cognitive and Developmental Systems.
Byung-Doh Oh, Christian Clark, and William Schuler.
2022. Comparison of structural parsers and neural language models as surprisal estimators. Frontiers in Artificial Intelligence, 5.
Byung-Doh Oh and William Schuler. 2022. Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times?
arXiv preprint arXiv:2212.12131.
David C Plaut, James L McClelland, Mark S Seidenberg, and Karalyn Patterson. 1996. Understanding normal and impaired word reading: Computational principles in quasi-regular domains. In *Connectionist psychology: A text with readings*, pages 367–454.
Psychology Press.
Ralph Radach and Alan Kennedy. 2013. Eye movements in reading: Some theoretical context. *Quarterly Journal of Experimental Psychology*, 66(3):429–
452.
Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. *Psychological bulletin*, 124(3):372.
Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Philip Levy. 2022. Large-scale evidence for logarithmic effects of word predictability on reading time.
Noam Siegelman, Sascha Schroeder, Cengiz Acartürk, Hee-Don Ahn, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, Daria Chernova, et al. 2022. Expanding horizons of cross-linguistic research on reading: The multilingual eye-movement corpus (meco). *Behavior research methods*, pages 1–21.
Nathaniel J Smith and Roger Levy. 2008. Optimal processing times in reading: a formal model and empirical investigation. In *Proceedings of the Annual* Meeting of the Cognitive Science Society, volume 30.
Nathaniel J Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic.
Cognition, 128(3):302–319.
Adrian Staub. 2015. The effect of lexical predictability on eye movements in reading: Critical review and theoretical interpretation. *Language and Linguistics* Compass, 9(8):311–327.
Yuta Takahashi, Yohei Oseki, Hiromu Sakai, Michiru Makuuchi, and Rieko Osu. 2021. Identifying brain regions related to word prediction during listening to japanese speech by combining a lstm language model and meg. *bioRxiv*.
Jos JA Van Berkum, Colin M Brown, Pienie Zwitserlood, Valesca Kooijman, and Peter Hagoort. 2005.
Anticipating upcoming words in discourse: evidence from ERPs and reading times. *Journal of Experimental Psychology: Learning, Memory, and Cognition*,
31(3):443.
Marten Van Schijndel and Tal Linzen. 2021. Singlestage prediction models do not explain the magnitude of syntactic disambiguation difficulty. *Cognitive science*, 45(6):e12988.
Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the predictive power of neural language models for human real-time comprehension behavior. arXiv preprint arXiv:2006.01912.
Cai Wingfield and Louise Connell. 2022. Understanding the role of linguistic distributional knowledge in cognition. *Language, Cognition and Neuroscience*,
pages 1–51.
## A Effects Of Surprisal By Language And Model Type
We report in Table 1 the regression coefficients of surprisal (as computed on the target word wi), the t statistic and the associated p-value, divided by language, number of parameters, and fixation measure considered. The surprisal estimates obtained from the four XGLM models were statistically significant predictors of processing times in all the language × model combinations when considering GD and TT, and in the vast majority of the cases when considering FF as the dependent variable.
These results are overall more robust than those obtained by de Varda and Marelli (2022), who did not report significant partial effects of surprisal on FF and GD in some of the languages considered.
The authors derived their probabilistic estimates employing mBERT, a bidirectional encoder. This finding highlights the importance of employing standard left-to-right causal language models when studying the effects of predictability on incremental sentence processing.
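For illustration, the sketch below (ours, not the evaluation code used in this study) shows one way to obtain subword-level surprisal from a left-to-right causal language model with the HuggingFace transformers API; the checkpoint name and the aggregation of subword surprisals into word-level values are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; any left-to-right causal LM can be plugged in here.
MODEL_NAME = "facebook/xglm-564M"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def subword_surprisals(sentence: str):
    """Return (subword, surprisal in nats) for every subword after the first one,
    where surprisal is -log P(w_i | w_1..i-1). Word-level surprisal can then be
    obtained by summing over the subwords of each word."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    out = []
    for i in range(1, input_ids.size(1)):
        tok_id = input_ids[0, i].item()
        surprisal = -log_probs[0, i - 1, tok_id].item()
        out.append((tokenizer.decode([tok_id]), surprisal))
    return out

print(subword_surprisals("The children went outside to play."))
```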
## B Relationship Between Perplexity And ∆LogLik
The perplexity of a model (Eq. 1) is commonly considered an intrinsic measure of a language model's linguistic accuracy. The use of perplexity to evaluate a multilingual language model is not free of concerns (see §4), but for completeness and consistency with the literature we also report the relationship between perplexity and ∆LogLik.
$$\exp\left[-\frac{1}{N}\sum_{i=1}^{N}\log P(w_{i}\mid w_{1\ldots i-1})\right]\qquad(1)$$
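As a minimal illustration of Eq. 1 (ours, not part of the original analysis), perplexity is simply the exponential of the average per-token negative log-likelihood:

```python
import math

def perplexity(token_log_probs):
    """token_log_probs: natural logs of P(w_i | w_1..i-1) for the N scored tokens."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# Example: three tokens assigned probabilities 0.25, 0.50, and 0.10.
print(perplexity([math.log(0.25), math.log(0.50), math.log(0.10)]))
```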
We analyzed the relationship between perplexity and ∆LogLik by fitting three generalized additive mixed models (GAMMs; one for each eye-tracking measure considered), with random slopes and intercepts for language. Note that the presence of by-language random effects mitigates the problem of comparing perplexity values computed over potentially different vocabularies.
The results are graphically depicted in Figure 2.
In the case of FF (2a), we found a significant relationship between perplexity and ∆LogLik (EDF =
6.093, F = 3.623, p = 0.0095), which appears to be positive and (near)-linear from graphical inspection. In the case of GD (2b), we still found a significant partial effect of perplexity (EDF = 6.760, F = 4.466, p = 0.0019); however, the functional form of this relationship is far from linearity in this case, and is characterized by an initial growth in ∆LogLik with increasing perplexity, a local plateau, and an inversion of the trend in the 400-550 perplexity range. There is then a second inversion of the trend in the 500-600 perplexity range, although with high partial residuals. In the case of TT (2c), the relationship is clearly quadratic from graphical inspection, although the partial effect of perplexity is not statistically significant (EDF = 2.016, F = 2.152, p =
0.123).
Taken together, these results corroborate our observation that there is a negative relationship between the linguistic and the psychological accuracy of a model when considering the earliest fixation measurement, namely FF (§5); this relationship is less clear-cut when considering GD, and nonsignificant when considering TT. The very absence of a significant relationship between perplexity and
∆LogLik in this latter case demonstrates that the finding that smaller models outperform their overparametrized counterparts in cognitive modelling critically depends on the computational complexity of the mental processes being analyzed.
## C Cross-Lingual Variation In Later Measurements
The cross-lingual variation of our results increased with gaze duration and total reading time, in particular when considering XGLM 564M; our tentative explanation for this pattern is motivated by the fact that late eye-tracking measures subsume the early ones (FF < GD < TT). XGLM 564M is very effective at capturing early eye movement measurements
(Figure 1a); some of the later measures are de facto equivalent to the earlier ones in some cases (e.g., if a word is only fixated once, FF, GD, and TT
will have the same value). XGLM 564M might be more effective in modelling late eye-tracking data in languages where these cases are more common, and less effective in languages where it is more common to refixate. This hypothesis relies on the observation in the MECO paper that refixations are more common in some languages than others (e.g.,
Estonian, see Siegelman et al., 2022).
[Figure 2: GAMM partial effects of perplexity on ∆LogLik for (a) first fixation duration, (b) gaze duration, and (c) total reading time.]
| Language | Family | θ | FF Estimate | FF t | FF p | GD Estimate | GD t | GD p | TT Estimate | TT t | TT p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Finnish | Uralic | 564M | 0.0147 | 1.5527 | 0.1207 | 0.1118 | 12.5044 | 3.87e-34 | 0.1567 | 16.8540 | 2.35e-58 |
| Greek | Indoeuropean | 564M | 0.0237 | 3.2279 | 0.0013 | 0.0777 | 9.8810 | 1.66e-22 | 0.1147 | 15.2416 | 1.11e-49 |
| Korean | Koreanic | 564M | 0.0371 | 4.4060 | 1.14e-05 | 0.0817 | 8.3948 | 1.22e-16 | 0.1171 | 10.6387 | 1.52e-25 |
| Russian | Indoeuropean | 564M | 0.0300 | 3.7423 | 0.0002 | 0.0879 | 11.0399 | 2.00e-27 | 0.1342 | 16.1478 | 7.37e-55 |
| Turkish | Turkic | 564M | 0.0126 | 1.3442 | 0.1791 | 0.0876 | 10.3490 | 2.35e-24 | 0.1290 | 13.2625 | 2.87e-38 |
| English | Indoeuropean | 564M | 0.0248 | 3.6112 | 0.0003 | 0.0661 | 8.9182 | 1.01e-18 | 0.0885 | 11.1290 | 5.16e-28 |
| Spanish | Indoeuropean | 564M | 0.0131 | 2.0597 | 0.0396 | 0.0554 | 8.5928 | 1.57e-17 | 0.0713 | 9.9527 | 6.85e-23 |
| Estonian | Uralic | 564M | 0.0285 | 3.5439 | 0.0004 | 0.1437 | 17.0570 | 7.57e-60 | 0.1764 | 20.9928 | 3.12e-86 |
| Italian | Indoeuropean | 564M | 0.0272 | 3.7723 | 0.0002 | 0.0987 | 13.0335 | 2.37e-37 | 0.1108 | 13.8504 | 8.62e-42 |
| German | Indoeuropean | 564M | 0.0238 | 2.7832 | 0.0054 | 0.0954 | 10.4169 | 8.55e-25 | 0.1361 | 15.3138 | 3.39e-50 |
| Finnish | Uralic | 2.9B | 0.0083 | 0.9303 | 0.3524 | 0.1073 | 12.7519 | 2.27e-35 | 0.1530 | 17.5951 | 5.65e-63 |
| Greek | Indoeuropean | 2.9B | 0.0207 | 2.9912 | 0.0028 | 0.0744 | 10.0489 | 3.32e-23 | 0.1037 | 14.5768 | 8.41e-46 |
| Korean | Koreanic | 2.9B | 0.0378 | 4.6397 | 3.83e-06 | 0.0780 | 8.2755 | 3.23e-16 | 0.1112 | 10.4415 | 1.11e-24 |
| Russian | Indoeuropean | 2.9B | 0.0209 | 2.7227 | 0.0065 | 0.0816 | 10.6812 | 7.95e-26 | 0.1313 | 16.5508 | 2.49e-57 |
| Turkish | Turkic | 2.9B | 0.0096 | 1.0695 | 0.2850 | 0.0903 | 11.1858 | 4.66e-28 | 0.1350 | 14.6822 | 4.57e-46 |
| English | Indoeuropean | 2.9B | 0.0180 | 2.7426 | 0.0062 | 0.0593 | 8.3905 | 8.81e-17 | 0.0800 | 10.5037 | 3.37e-25 |
| Spanish | Indoeuropean | 2.9B | 0.0100 | 1.6592 | 0.0972 | 0.0474 | 7.7710 | 1.17e-14 | 0.0600 | 8.8493 | 1.67e-18 |
| Estonian | Uralic | 2.9B | 0.0160 | 2.0700 | 0.0386 | 0.1397 | 17.2667 | 3.91e-61 | 0.1695 | 20.9452 | 7.97e-86 |
| Italian | Indoeuropean | 2.9B | 0.0217 | 3.1766 | 0.0015 | 0.0852 | 11.7870 | 4.51e-31 | 0.0981 | 12.8402 | 2.24e-36 |
| German | Indoeuropean | 2.9B | 0.0188 | 2.4136 | 0.0159 | 0.0849 | 10.1956 | 7.62e-24 | 0.1278 | 15.9417 | 5.10e-54 |
| Finnish | Uralic | 1.7B | 0.0166 | 1.8208 | 0.0689 | 0.1079 | 12.5299 | 2.91e-34 | 0.1511 | 16.8523 | 2.44e-58 |
| Greek | Indoeuropean | 1.7B | 0.0188 | 2.6711 | 0.0076 | 0.0694 | 9.1802 | 1.05e-19 | 0.1015 | 13.9720 | 2.18e-42 |
| Korean | Koreanic | 1.7B | 0.0361 | 4.3679 | 1.35e-05 | 0.0804 | 8.4397 | 8.62e-17 | 0.1115 | 10.3313 | 3.21e-24 |
| Russian | Indoeuropean | 1.7B | 0.0280 | 3.6187 | 0.0003 | 0.0835 | 10.8437 | 1.52e-26 | 0.1332 | 16.6544 | 5.51e-58 |
| Turkish | Turkic | 1.7B | 0.0160 | 1.7717 | 0.0766 | 0.0927 | 11.4122 | 4.25e-29 | 0.1390 | 15.0582 | 3.17e-48 |
| English | Indoeuropean | 1.7B | 0.0218 | 3.2646 | 0.0011 | 0.0614 | 8.5422 | 2.50e-17 | 0.0851 | 11.0218 | 1.62e-27 |
| Spanish | Indoeuropean | 1.7B | 0.0117 | 1.9096 | 0.0563 | 0.0500 | 8.0723 | 1.11e-15 | 0.0650 | 9.4580 | 7.24e-21 |
| Estonian | Uralic | 1.7B | 0.0213 | 2.7109 | 0.0068 | 0.1427 | 17.4455 | 2.85e-62 | 0.1757 | 21.5929 | 1.89e-90 |
| Italian | Indoeuropean | 1.7B | 0.0225 | 3.2373 | 0.0012 | 0.0905 | 12.3422 | 8.42e-34 | 0.0990 | 12.7184 | 9.68e-36 |
| German | Indoeuropean | 1.7B | 0.0226 | 2.8348 | 0.0046 | 0.0914 | 10.7334 | 3.47e-26 | 0.1328 | 16.1677 | 2.01e-55 |
| Finnish | Uralic | 4.5B | 0.0065 | 0.7063 | 0.4801 | 0.1082 | 12.3959 | 1.33e-33 | 0.1539 | 16.9725 | 4.49e-59 |
| Greek | Indoeuropean | 4.5B | 0.0140 | 2.0330 | 0.0422 | 0.0671 | 9.0501 | 3.32e-19 | 0.0979 | 13.7244 | 4.97e-41 |
| Korean | Koreanic | 4.5B | 0.0345 | 4.1755 | 3.16e-05 | 0.0845 | 8.8906 | 2.05e-18 | 0.1201 | 11.2087 | 4.63e-28 |
| Russian | Indoeuropean | 4.5B | 0.0189 | 2.4883 | 0.0129 | 0.0742 | 9.7792 | 5.17e-22 | 0.1206 | 15.1986 | 3.92e-49 |
| Turkish | Turkic | 4.5B | 0.0088 | 0.9803 | 0.3271 | 0.0876 | 10.8771 | 1.14e-26 | 0.1290 | 14.0001 | 2.96e-42 |
| English | Indoeuropean | 4.5B | 0.0234 | 3.5854 | 0.0003 | 0.0591 | 8.3770 | 9.82e-17 | 0.0790 | 10.3938 | 1.01e-24 |
| Spanish | Indoeuropean | 4.5B | 0.0092 | 1.5360 | 0.1247 | 0.0446 | 7.3498 | 2.75e-13 | 0.0569 | 8.4442 | 5.21e-17 |
| Estonian | Uralic | 4.5B | 0.0232 | 2.9332 | 0.0034 | 0.1446 | 17.5013 | 1.14e-62 | 0.1787 | 21.8217 | 3.29e-92 |
| Italian | Indoeuropean | 4.5B | 0.0227 | 3.3715 | 0.0008 | 0.0800 | 11.1795 | 3.31e-28 | 0.0922 | 12.1985 | 4.12e-33 |
| German | Indoeuropean | 4.5B | 0.0126 | 1.6572 | 0.0976 | 0.0791 | 9.6844 | 1.02e-21 | 0.1210 | 15.3631 | 1.70e-50 |

Table 1: Regression coefficients of surprisal (Estimate), t statistics, and associated p-values for first fixation duration (FF), gaze duration (GD), and total reading time (TT), by language, family, and model size (θ).
## ACL 2023 Responsible NLP Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations" (unnumbered, page 5)
✗ A2. Did you discuss any potential risks of your work?
There are no reasonable risks in our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract (unnumbered), Introduction (§1), Related work and motivation (§2), Aims (§3)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** §4.1, §4.2
✓ B1. Did you cite the creators of artifacts you used?
§4.1, §4.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No, due to space restrictions. However, the artifacts that we employed were publicly released for research purposes.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No; however, the artifacts were employed in accordance with their intended use.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No, but the authors of the artifacts did, and we provided a reference to the original article.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? §4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. §4.3
## C ✓ **Did You Run Computational Experiments?** §4, §5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We did report the number of parameters (§4.2) but not the computational budget or the computing infrastructure as we did not train the models ourselves.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
§5
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We used the defaults parameters of the transformers library.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
rai-etal-2023-improving | Improving Generalization in Language Model-based Text-to-{SQL} Semantic Parsing: Two Simple Semantic Boundary-based Techniques | https://aclanthology.org/2023.acl-short.15 | Compositional and domain generalization present significant challenges in semantic parsing, even for state-of-the-art semantic parsers based on pre-trained language models (LMs). In this study, we empirically investigate improving an LM{'}s generalization in semantic parsing with two simple techniques: at the token level, we introduce a token preprocessing method to preserve the semantic boundaries of tokens produced by LM tokenizers; at the sequence level, we propose to use special tokens to mark the boundaries of components aligned between input and output. Our experimental results on two text-to-SQL semantic parsing datasets show that our token preprocessing, although simple, can substantially improve the LM performance on both types of generalization, and our component boundary marking method is particularly helpful for compositional generalization. | # Improving Generalization In Language Model-Based Text-To-Sql Semantic Parsing: Two Simple Semantic Boundary-Based Techniques
Daking Rai1, Bailin Wang2, Yilun Zhou2, Ziyu Yao1 1George Mason University, 2MIT
1{drai2, ziyuyao}@gmu.edu, 2{bailinw, yilun}@mit.edu
## Abstract
Compositional and domain generalization present significant challenges in semantic parsing, even for state-of-the-art semantic parsers based on pre-trained language models (LMs).
In this study, we empirically investigate improving an LM's generalization in semantic parsing with two simple techniques: at the *token* level, we introduce a token preprocessing method to preserve the semantic boundaries of tokens produced by LM tokenizers; at the sequence level, we propose to use special tokens to mark the boundaries of components aligned between input and output. Our experimental results on two text-to-SQL semantic parsing datasets show that our token preprocessing, although simple, can substantially improve the LM performance on both types of generalization, and our component boundary marking method is particularly helpful for compositional generalization.1
## 1 Introduction
Pre-trained language models (LMs)2 such as T5
(Raffel et al., 2020) have now been more and more widely adopted for semantic parsing due to their promising performance and straightforward architectures (Shaw et al., 2021; Scholak et al., 2021; Yin et al., 2021; Qi et al., 2022; Xie et al., 2022; Qiu et al., 2021). However, recent work revealed that these LMs still struggle to generalize on out-of-distribution (OOD) samples (Lake and Baroni, 2018; Keysers et al., 2019; Shaw et al., 2021; Qiu et al., 2022b). For example, if a parser has learned
"how many heads are in the department" and "how many people are older than 56", it is expected to generalize to "how many heads of the departments
1The source code for our implementation is available at https://github.com/Dakingrai/ood-generalization-semantic-boundary-techniques.
2We use "LMs" to refer to a broad set of models that are pre-trained in (masked/autoregressive) language modeling objectives, with encoder-decoder or decoder-only architecture.
Before: How many heads of the departments are older than 56 ?
select count (head.*) where head.age > 56
After: [sep0] How many heads of the departments [/sep0] [sep1] are older than 56 ? [/sep1]
[sep0] select count (head.*) [/sep0] [sep1] where head.age > 56 [/sep1]
Table 1: Our proposed techniques. Top: we preprocess the text such that its T5 tokenization aligns with word semantics. Coloring indicates tokenization; for example,
"avg" is converted into three tokens of "a", "v" and "g".
Bottom: we add separator tokens to mark the boundaries of aligned semantic components in the input and output.
are older than 56". Generalizing to such novel component compositions is known as *compositional* generalization. Additionally, generalizing to new domains (e.g., from "entertainment" to "flight") is referred to as *domain generalization*.
In this paper, we investigate these two types of generalization of LMs in text-to-SQL semantic parsing, i.e., given a natural language (NL) input and the database schema, producing a SQL
query that can be executed against the database for desired output. We conduct experiments using the cross-database Spider benchmark (Yu et al.,
2018b) and its derivation Spider-CG (Gan et al.,
2022). Compared with existing benchmarks (Keysers et al., 2019; Lake and Baroni, 2018), this task setting is both more realistic (e.g., containing larger language variations) and more challenging (e.g., requiring grounding to the database context).
Although previous work tackling these two types of generalization requires non-trivial engineering effort (see Section 2), in this work we present two simple yet effective techniques that are extremely easy to implement with LMs (Table 1).
Our techniques improve the generalization of LMs by preserving the *semantic boundaries* at the token and the sequence levels. At the token level, our first technique rewrites the inputs to handle naming conventions in database schemas and SQL queries such that a pre-trained LM tokenizer can split them into semantically meaningful tokens. At the sequence level, our second technique introduces special tokens to mark the semantic boundaries (e.g.,
phrases) aligned between the source NL and the target SQL. These special tokens implicitly help the LM-based parser build more precise input-output correspondences that are crucial for compositional generalization.
On five evaluation sets, the experimental results based on T5-base show that, albeit simple, our token-level technique dramatically improves both types of LM generalization, and our sequence-level technique is particularly helpful for compositional generalization. Combining them together leads to further improvements. Our additional experiments further demonstrate the generalizability of our approaches (e.g., to text-to-LISP expression parsing
(Semantic Machines et al., 2020)).
## 2 Related Work
Text-to-SQL Semantic Parsing. This task has received considerable attention since the creation of the WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b) datasets. While a large amount of existing work designed specialized architectures for this task (Yu et al., 2018a; Zhang et al., 2019; Wang et al., 2020; Lin et al., 2020), there has been a trend of directly fine-tuning pre-trained sequence-to-sequence models as semantic parsers (Shaw et al., 2021; Scholak et al., 2021; Xie et al., 2022; Qi et al., 2022). Our work follows the same line and proposes approaches to further improve the LM
performance. On the other hand, Guo et al. (2019); Gan et al. (2021); Herzig et al. (2021) showed that simplifying the SQL representation in a way that the new representation can semantically better align with the NL can dramatically improve the parsing performance. In our work, we follow the NatSQL
representation (Gan et al., 2021) as it has better alignments with the NL.
Injecting Priors into Semantic Parsers. Our two techniques can be viewed as injecting human prior knowledge into neural models for better generalization, which has been one of the major research efforts on improving domain and compositional generalization. The key consideration to be taken when injecting priors is the trade-off between the form and the generalizability. Strong priors in the form of specialized model architectures (Shaw et al., 2021; Herzig and Berant, 2021; Wang et al.,
2021) are either too expensive or not applicable across domains. Weaker priors in terms of specialized training algorithms (Yin et al., 2021; Conklin et al., 2021) are more general, but often weaker in performance compared to other lines of methods.
Our work is in the spirit of the third line on the use of data augmentation (Andreas, 2020; Akyürek et al., 2020; Qiu et al., 2022a). However, instead of synthesizing new data from scratch, we "annotate" the data with semantic boundary markers, which is not only much simpler but also brings better performance. The final line of work (Qiu et al., 2022b; Levy et al., 2022) is based on the learning capacities in the context of large LMs, which is out of the scope of this work.
## 3 Methods 3.1 Token Preprocessing
| Before preprocessing | After preprocessing |
|----------------------|---------------------|
| *Snake case in schema items (add space)* | |
| booking_status_code | booking _ status _ code |
| document_type | document _ type |
| *Dot notation in column references (add space)* | |
| farm.cows | farm . cows |
| origin.flight | origin . flight |
| *SQL keyword (expand spelling)* | |
| avg | average |
| desc | descending |

Table 2: Examples of our token preprocessing.
We present our two techniques for improving the generalization of LM-based semantic parsers.
LM pre-training learns high-quality contextualized word representation (Devlin et al., 2019), but to effectively use it on a downstream task, the tokenization needs to "make sense." For example, if the text
"pet_age" is tokenized as "pet", "_" and "age", then the semantics of "pet" and "age" acquired during pretraining can be directly used. However, if it is
| Dataset | Size | Usage | Generalization Type |
|-----------|--------|---------|------------------------|
| SpiderT | 7,000 | Train | None (in-distribution) |
| SpiderD | 1,034 | Eval | Domain |
| CG-SUBT | 20,686 | Eval | None (in-distribution) |
| CG-SUBD | 2,883 | Eval | Domain |
| CG-APPT | 18,793 | Eval | Composition |
| CG-APPD | 3,237 | Eval | Domain & Composition |
tokenized as "pe", "t_a" and "ge", then pre-training is hardly useful because the model does not even recognize the two semantic words.
Unfortunately, this latter case is very common when tokenizing non-natural language texts, such as database schemas and SQL queries. Thus, we propose a token preprocessing method to induce more natural tokenization by, at a high level, adding white spaces and handling the naming conventions in database schema and SQL queries. We show examples in Table 2 and details in Appendix A.
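As a quick illustration (a sketch of ours, not the released implementation), the effect of the preprocessing on the T5 subword tokenizer can be inspected directly:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

examples = [
    "booking_status_code",       # raw snake-cased schema item
    "booking _ status _ code",   # after adding spaces around the underscores
    "farm.cows",                 # raw Table.Column reference
    "farm . cows",               # after splitting around the access operator
]
for text in examples:
    # Print the subword pieces so the two tokenizations can be compared.
    print(f"{text!r} -> {tokenizer.tokenize(text)}")
```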
## 3.2 Component Boundary Marking
At the sequence level, our second technique further assists LMs in recognizing the semantic boundaries of components aligned between input and output.
An example is shown in Table 1. While prior work has attempted the goal via implementing alignmentbased attention supervision (Yin et al., 2021), we propose to insert *special tokens* in input and output to inject such bias. Specifically, we use pairs of "[sepN]" and "[/sepN]", N ∈ Z, to mark the boundaries, so as to hint the LM that components within the paired special tokens should be aligned. In practice, we also observed cases where an NL component has to be aligned with a SQL
component consisting of multiple non-continuous segments. To handle it, we will apply the same pair of special tokens to each segment of the same component. An example is shown in Table 8 in the Appendix.
Finally, we note that our method assumes the availability of component annotations. Such annotations can be obtained via human labeling (Gan et al., 2021), heuristic rules (Yin et al., 2021), or other advanced machine learning algorithms, but this is beyond the scope of our work.
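A minimal sketch of the marking step is given below (our own illustration; it assumes the component annotations are already given as lists of aligned NL and SQL segments, and the helper names are hypothetical):

```python
from transformers import AutoTokenizer

def mark_components(nl_segments, sql_segments):
    """nl_segments[i] is the NL span of component i; sql_segments[i] is a list of
    (possibly non-continuous) SQL pieces of the same component, each of which is
    wrapped with the same [sepi] ... [/sepi] pair."""
    nl = " ".join(f"[sep{i}] {seg} [/sep{i}]" for i, seg in enumerate(nl_segments))
    sql = " ".join(f"[sep{i}] {piece} [/sep{i}]"
                   for i, pieces in enumerate(sql_segments)
                   for piece in pieces)
    return nl, sql

nl, sql = mark_components(
    ["How many heads of the departments", "are older than 56 ?"],
    [["select count (head.*)"], ["where head.age > 56"]],
)
print(nl)
print(sql)

# Register the markers as new tokens so the tokenizer never splits them; the T5
# embedding matrix would then be resized with model.resize_token_embeddings(len(tokenizer)).
tokenizer = AutoTokenizer.from_pretrained("t5-base")
markers = [t for i in range(10) for t in (f"[sep{i}]", f"[/sep{i}]")]
tokenizer.add_tokens(markers)
```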
## 4 Experiments 4.1 Setup
Datasets. We use two datasets, Spider (Yu et al.,
2018b) and Spider-CG (Gan et al., 2022). Spider consists of a training set (SpiderT ) and a development set (SpiderD) with non-overlapping domains but otherwise similar data characteristics (e.g.,
length). Thus, we train the models on SpiderT , and consider SpiderD as the evaluation for domain generalization. Spider-CG is derived from Spider by first dissecting each Spider instance into different components according to its dependency parse and then generating data in two ways: substituting a component in one instance with one from another instance, and appending one component from one instance to another instance. Depending on whether the instances come from the Spider training or development set, we get four splits: CG-SUBT , CG-SUBD, CG-APPT and CG-APPD, all of which are only used for evaluation. The instances created under substitution share similar data characteristics, while those under appending are much longer, so a good model performance on the latter requires compositional generalization. Table 3 summarizes the dataset information. In addition, we use the NatSQL representation (Gan et al., 2021) throughout the experiment due to its better alignment with the NL input.
Evaluation Metrics. We follow the standard Spider benchmarking and employ two evaluation metrics. **Exact Match (EM)** compares the generated and the ground-truth query by performing exact set matching at the lexical level (Yu et al., 2018b).
Execution Match (EX) measures whether executing the generated query on the given database can yield the same results as using the ground truth.
Notably, for a fair comparison with existing semantic parsers on the Spider leader board, we follow Gan et al. (2022), convert each generated NatSQL
query into a SQL query, and report the evaluation results based on the converted SQL query.
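For intuition, a highly simplified sketch of the EX check is shown below (ours; the official Spider evaluation script is considerably more involved, e.g., it distinguishes ordered and unordered results):

```python
import sqlite3
from collections import Counter

def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """Return True if both queries execute and produce the same multiset of rows."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(predicted_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # an un-executable prediction counts as a mismatch
    finally:
        conn.close()
    # Compare as multisets so that row order does not matter.
    return Counter(pred_rows) == Counter(gold_rows)
```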
Models, Baselines, and Implementation. We evaluate our proposed techniques by applying them to the pre-trained T5 model (Raffel et al., 2020).
Our experiments are conducted using T5-base, with the use of database contents following Lin et al.
(2020). As our second technique leverages component boundary labels to encourage the compositional generalization of LM, we compare it with a baseline (Yin et al., 2021) which similarly assumes the labels but utilizes them in a more complicated way, i.e., transforming the component alignments into supervision on the cross attention between input and output of the LM. We denote this base-
| Model | SpiderD EM | SpiderD EX | CG-SUBT EM | CG-SUBT EX | CG-SUBD EM | CG-SUBD EX | CG-APPT EM | CG-APPT EX | CG-APPD EM | CG-APPD EX |
|---|---|---|---|---|---|---|---|---|---|---|
| *Semantic Parsers with Specialized Architectures (Gan et al., 2022)* | | | | | | | | | | |
| RATSQLB(S) | 71.9 | - | 91.0 | - | 72.6 | - | 79.8 | - | 61.5 | - |
| RATSQLG(S) | 74.5 | - | 91.4 | - | 76.7 | - | 82.5 | - | 68.3 | - |
| *Semantic Parsers based on LMs* | | | | | | | | | | |
| T5-base | 64.6 | 67.9 | 83.8 | 88.1 | 69.1 | 71.1 | 60.2 | 70.3 | 45.0 | 54.9 |
| T5-base + Tok | 71.8 | 75.6 | 85.9 | 89.5 | 74.1 | 78.6 | 65.2 | 73.8 | 54.2 | 65.9 |
| T5-base + Comp | 64.4 | 68.2 | 86.3 | 90.2 | 69.3 | 73.1 | 69.8 | 77.9 | 53.5 | 63.4 |
| T5-base + Tok + Comp | 69.4 | 73.2 | 86.6 | 90.7 | 76.6 | 79.8 | 71.1 | 77.8 | 61.0 | 69.4 |
| T5-base + Tok + Attn. Sup | 69.4 | 73.7 | 83.6 | 87.7 | 71.7 | 75.6 | 62.3 | 70.8 | 56.3 | 66.2 |

Table 4: EM and EX results (%) of all models on the five evaluation sets.
line as **Attn. Sup**.3
For both methods, we leverage component annotations from Spider-SS (Gan et al., 2022). These annotations were generated by applying a syntactic parser to decompose the NL
question into sub-questions and then manually annotating their corresponding NatSQL components.
We also compare with the state-of-the-art models, RATSQLB(S) and RATSQLG(S), from Gan et al.
(2022), although their models adopt a specialized architecture (i.e., RATSQL (Wang et al., 2020)) and RATSQLG(S) additionally employed task-specific pre-training (Shi et al., 2021). Both models used the same component annotations from Spider-SS.
Finally, for each of our model variants in Table 4, we repeat the experiment three times, using three random seeds consistently across all models, and report the average results. We include more implementation details in Appendix D.
## 4.2 Results
Main Results. We present our results in Table 4. First, all models obtain the best performance on the in-distribution evaluation set CG-SUBT while suffering from more than 10% performance drops on the others, confirming the challenges of domain and compositional generalization. As expected, all models have the worst performance on CG-APPD,
which requires both types of generalization. Between the two types, it is also observed that compositional generalization (as measured by CG-APPT )
is more challenging than domain generalization (as measured by SpiderD and CG-SUBD).
Second, our results show that the token preprocessing method, albeit simple, can improve both domain and compositional generalizations of LMs dramatically. For example, comparing T5-base with T5-base+Tok, the latter is improved by around 5-7% EM and 7% EX for domain generalization
(on SpiderD and CG-SUBD), 5% EM and 3.5% EX
for compositional generalization (on CG-APPT ),
and 9% EM and 11% EX for the challenging case when both types occur (on CG-APPD). Additionally, we also show the effectiveness of token preprocessing with T5-3B on SpiderD in App. B.
Moving on to our proposed component boundary marking method, it proves to be particularly helpful for compositional generalization. Specifically, applying it to T5-base leads to a 9% EM and 7%
EX increase on CG-APPT , and an 8% EM and 8% EX increase on CG-APPD. On the in-distribution evaluation set, this technique also gives slight improvement, whereas, for domain generalization, there is no obvious impact from this technique.
Finally, augmenting T5-base with both techniques (i.e., T5-base+Tok+Comp) leads to better performance than applying each technique individually in most evaluation sets, implying that our two techniques are complementary to each other. Specifically, for in-distribution evaluation, using each technique individually or both of them together yield similar results; for domain generalization, there is no additional gain from applying component boundary marking on the top of the token preprocessing; for compositional generalization, the two techniques together contribute the best EM across all models and baselines. Overall, combining the two techniques shrinks the performance gap between in-distribution and domain OOD by around 2-4% EM, composition OOD by 7%, and joint OOD by 13%.
Compared with Special Architectures. Despite its simplicity, our T5-base+Tok+Comp model achieves comparable or better performance than the two RATSQL variants on CG-SUBD. It also performs comparably to RATSQLB(S) on CG-APPD.
Compared with Attn. Sup. Surprisingly, the attention supervision has only led to around 2% EM
and 1.5% EX gains on CG-APPD, while no further advantage is observed on the other evaluation sets. We conjecture that this is due to the misalignment between the objective of Attn. Sup (Yin et al., 2021)
and the attention mechanism of pre-trained LMs.
Specifically, Attn. Sup encourages the attention distribution of different heads to be consistent with the component alignment supervision. However, prior work (Voita et al., 2019) suggests that different attention heads of even the same layer may have different functions and roles. Thus, when coarsely defining the objective function, it may not allow for the most effective supervision. Furthermore, similar to our finding, Yin et al. (2021) did not observe performance gain when they applied Attn. Sup to T5-base on CFQ (Keysers et al., 2020).
Qualitative Analysis on Tokenization. To qualitatively understand how our token preprocessing helps the generalization, we randomly sampled 50 examples from the SpiderD to analyze how frequently the T5 tokenizer divides tokens into less meaningful subtokens. Consequently, we found 243 tokenization issues in total, and 140 of them can be resolved by our token preprocessing. The remaining cases are like splitting "id" into "i" and
"d" as shown in Table 1, which is beyond our scope.
Error Analysis on Component Boundary Marking. We manually examined 50 error predictions from T5-base+Tok+Comp and contrasted them with the errors of T5-base+Tok. Intriguingly, we observed much more frequent schema-item and value hallucinations from the former. For example, the model may generate queries accessing non-existing columns in a table, or misspell the literal values in the queries. We conjecture that this is because our component boundaries are only applied to the NL input, not the database schema (note that literal values are grounded and attached to schema items
| Model | Exact Match |
|------------------------------------|---------------|
| COARSE2FINE + SS (Span-level Sup.) | 47.4 |
| T5-base | 63.9 |
| T5-base + Tok | 65.1 |
| T5-base + Tok + Comp | 67.7 |
in their input representations; see Appendix D for details). This reveals a new challenge of LM generalization in text-to-SQL semantic parsing, i.e.,
how to properly handle the database schema when injecting prior knowledge into LMs for compositional generalization.
## Generalizing To Other Semantic Parsing Tasks.
While our main focus in this work is on text-to-SQL parsing, we also investigate whether our approaches can generalize beyond this specific task.
To this end, we applied both of our techniques to SMCalFlow-CS (Yin et al., 2021), a compositional generalization dataset for text-to-LISP expression parsing (Semantic Machines et al., 2020).
For "+Comp", we utilize the span-level alignments heuristically derived by Yin et al. (2021) as component annotations.4 Our results in Table 5 show that: (1) Our token preprocessing can be universally helpful for LMs to model schema items, predicates, etc., leading to a 1.2% performance gain over T5-base; (2) Our component boundary marking method is highly effective for compositional generalization, which offers a 2.6% additional gain.
## 5 Conclusion
In this paper, we present two simple yet effective techniques to improve the domain and compositional generalization of LMs in text-to-SQL semantic parsing. Our techniques aid LMs in preserving the semantic boundaries of tokens and components in their input and output. We also demonstrate their potential to be generalized to other semantic parsing tasks.
## Limitations
Future work can further apply our approaches to other semantic parsing tasks. For example, for parsing texts to lambda-calculus expressions for knowledge base question answering (Dong and Lapata, 2016), one can similarly preprocess the schema items (e.g., "department_time" into
"department _ time") and typed values (e.g.,
"dallas:ci" into "dallas : ci") for more meaningful subword tokenization results. In addition, our experiments are based on T5. To further verify the effectiveness of our techniques, one can apply them to other pre-trained language models such as BART (Lewis et al., 2020) and GPT-2 (Radford et al., 2019) as well.
## Acknowledgments
We would like to thank all anonymous reviewers for their constructive comments. We also thank Yujian Gan and Xinyun Chen for their help in using the NatSQL and the Spider-SS datasets, as well as Pengcheng Yin for using the code base of Attn. Sup.
This project was supported by resources provided by the Office of Research Computing at George Mason University (https://orc.gmu.edu) and funded in part by grants from the National Science Foundation (Awards Number 1625039 and 2018631).
## References
Ekin Akyürek, Afra Feyza Akyürek, and Jacob Andreas. 2020. Learning to recombine and resample data for compositional generalization. *arXiv preprint* arXiv:2010.03706.
Jacob Andreas. 2020. Good-enough compositional data augmentation. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7556–7566, Online. Association for Computational Linguistics.
Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. 2021. Meta-learning to compositionally generalize. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3322–3335, Online. Association for Computational Linguistics.
DeepSpeed. 2023. https://github.com/microsoft/deepspeed.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In *Proceedings of the* 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Association for Computational Linguistics.
Yujian Gan, Xinyun Chen, Qiuping Huang, and Matthew Purver. 2022. Measuring and improving compositional generalization in text-to-SQL via component alignment. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 831–
843, Seattle, United States. Association for Computational Linguistics.
Yujian Gan, Xinyun Chen, Jinxia Xie, Matthew Purver, John R. Woodward, John Drake, and Qiaofu Zhang.
2021. Natural SQL: Making SQL easier to infer from natural language specifications. In Findings of the Association for Computational Linguistics: EMNLP
2021, pages 2030–2042, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics.
Jonathan Herzig and Jonathan Berant. 2021. Spanbased semantic parsing for compositional generalization. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 908–921, Online. Association for Computational Linguistics.
Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, and Yuan Zhang. 2021. Unlocking compositional generalization in pre-trained models using intermediate representations. arXiv preprint arXiv:2104.07478.
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations.
Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin,
Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. 2019. Measuring compositional generalization: A comprehensive method on realistic data. *arXiv preprint arXiv:1912.09713*.
Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *Proceedings of the 35th International Conference on* Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, pages 2873–2882. PMLR.
Itay Levy, Ben Bogin, and Jonathan Berant. 2022.
Diverse demonstrations improve in-context compositional generalization. arXiv preprint arXiv:2212.06800.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Xi Victoria Lin, Richard Socher, and Caiming Xiong.
2020. Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 4870–4888, Online. Association for Computational Linguistics.
Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, and Zhouhan Lin. 2022. Rasat: Integrating relational structures into pretrained seq2seq model for text-to-sql. *arXiv preprint arXiv:2205.06983*.
Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova.
2022a. Improving compositional generalization with latent structure and data augmentation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4341–4362, Seattle, United States. Association for Computational Linguistics.
Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova. 2021. Improving compositional generalization with latent structure and data augmentation. *arXiv preprint arXiv:2112.07610*.
Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, and Kristina Toutanova. 2022b. Evaluating the impact of model scale for compositional generalization in semantic parsing. *arXiv preprint arXiv:2205.12253*.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. Task-oriented dialogue as dataflow synthesis. *Transactions of the Association for Computational Linguistics*, 8:556–571.
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 922–938, Online. Association for Computational Linguistics.
Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2021. Learning contextual representations for semantic parsing with generation-augmented pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13806–13814.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy.
Association for Computational Linguistics.
Bailin Wang, Mirella Lapata, and Ivan Titov. 2021.
Structured reordering for modeling latent alignments in sequence transduction. *Advances in Neural Information Processing Systems*, 34:13378–13391.
Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL:
Relation-aware schema encoding and linking for textto-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *arXiv preprint arXiv:2201.05966*.
Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, and Jacob Andreas. 2021. Compositional generalization for neural semantic parsing via spanlevel supervised attention. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2810–2823, Online.
Association for Computational Linguistics.
Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir Radev. 2018a.
SyntaxSQLNet: Syntax tree networks for complex and cross-domain text-to-SQL task. In *Proceedings* of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1653–1663, Brussels, Belgium. Association for Computational Linguistics.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018b. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.
Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. Editingbased SQL query generation for cross-domain context-dependent questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5338–5349, Hong Kong, China. Association for Computational Linguistics.
Victor Zhong, Caiming Xiong, and Richard Socher.
2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
## A Token Preprocessing Details
We propose a simple token preprocessing method.
Instead of directly feeding the input to the subword tokenizer, we introduce three preprocessing steps:
(1) For schema items in the input and output, splitting snake-cased names around the underscore, e.g., "pet_age" to
"pet _ age"; (2) For any call of "Table.Column",
splitting the tokens around the access operator "."
(i.e., "Table . Column"); and (3) Replacing any reserved words that cannot be properly handled in NatSQL, e.g., "avg" to "average". In practice, we also handle formalism-specific special tokens, e.g., adding the "less than" operator "<" to the vocabulary of T5 tokenizer. While we showcase our token preprocessing under text-to-SQL parsing, the intuition can be generalized to other formalisms
(e.g., regex, λ-expression) easily.
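A minimal sketch of these three steps is given below (our own illustration; the keyword list is only partial and the exact rules in the released code may differ, e.g., in how numeric literals are protected from the dot-splitting rule):

```python
import re

# Partial, illustrative expansion table for step (3).
KEYWORD_EXPANSIONS = {"avg": "average", "desc": "descending", "asc": "ascending"}

def preprocess_tokens(text: str) -> str:
    # (1) Snake case: "booking_status_code" -> "booking _ status _ code"
    text = text.replace("_", " _ ")
    # (2) Dot notation: "farm.cows" -> "farm . cows"
    #     (only splits between letters, so numeric literals like 3.5 are untouched)
    text = re.sub(r"(?<=[A-Za-z])\.(?=[A-Za-z])", " . ", text)
    # (3) Expand reserved words such as "avg" -> "average"
    tokens = [KEYWORD_EXPANSIONS.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(tokens)

print(preprocess_tokens("booking_status_code"))
print(preprocess_tokens("select avg ( farm.cows ) order by head.age desc"))
```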
In addition, we also check the issue of tokenization in other popular LM tokenizers and found that the tokenization issue is not specific to T5. Examples of bad tokenization from BERT (Devlin et al.,
2019) and GPT2 (Radford et al., 2019) tokenizers and after our token preprocessing are listed in Table 6.
GPT2 Tokenizer
Before: avg After: average
BERT Tokenizer
Before: avg After: average
Before: asc After: ascending
Table 6: Tokenization of snake case, camel case, and token notation in BERT and GPT2 tokenizer. Coloring indicates tokenization, same as Table 1.
## B T5-3B Experiment
To assess the effectiveness of our token preprocessing technique with larger LMs, we apply it to T5-3B and evaluate the model on SpiderD. The results
| Model | SpiderD EM | SpiderD EX |
|-------|------------|------------|
| T5-3B (w deepspeed) | 73.2 | 77.4 |
| T5-3B (w/o deepspeed) | 76.0 | 79.8 |
| T5-3B + Tok (w deepspeed) | 74.4 | 78.7 |
| T5-3B + Tok (w/o deepspeed) | 77.4 | 80.9 |
are shown in Table 7. Our results show that T5-3B+Tok has a performance gain of 1.1%, indicating that it is helpful for larger LMs as well. Additionally, we also provide results with and without using DeepSpeed (2023), a deep learning optimization library that is used to train large models more efficiently. Surprisingly, although DeepSpeed (2023)
helped us improve training speed, we found a performance drop of around 2.1-2.2% EX while using it. However, our token preprocessing consistently leads to around 1.0% absolute performance gain.
## C **Component Boundary Marking Details**
In Table 8, we present one more example of component boundary marking. In this example, the NL
component *"What is the most populace city"* is aligned with two non-continuous SQL segments,
"select city.Name, city.Population" and
"order by city.Population desc limit 1".
To handle such cases, we apply the same pair of special tokens "[sep0]" "[/sep0]" twice, one for each segment.
Component Boundary Marking Example
Before: What is the most populace city that speaks English?
Select city.Name, city.Population where countrylanguage.Language = "English" order by city.Population desc limit 1
After: [sep0] What is the most populace city [/sep0] [sep1] that speaks English? [/sep1]
[sep0] select city.Name , city.Population [/sep0] [sep1] where countrylanguage.Language = "English" [/sep1] [sep0] order by city.Population desc limit 1 [/sep0]

Table 8: An additional example of component boundary marking, where one NL component aligns with two non-continuous SQL segments.
## D Implementation Details
Our experiments are conducted based on the pretrained T5 model. The input to T5 follows the same format and order as Scholak et al. (2021) (except our additional token preprocessing, if applied), i.e.,
"Question | Database 1 | Table 1: Column 1, Column 2,...| Table 2: Column 1, Column 2...". We also use the database contents as parts of the input, following Lin et al. (2020). For example, if the NL question mentions a literal value
(e.g., "New York"), the appearance of whom can be found in the contents of a certain "Column 1" via fuzzy string matching, then when we represent the database schema, we will include it via "Database 1 | Table 1: Column 1 (New York), Column 2, ...".
We fine-tune the T5-base LM, which consists of 220 million parameters, on an NVIDIA A100 GPU for 10-12 hours. It was trained with a learning rate of $10^{-4}$ and batch size 16 for a maximum of 20K training steps. The model is evaluated on SpiderD every 1K training steps, and the best checkpoint is selected based on the model EM on SpiderD. At inference time, we perform simple greedy decoding.
We use the PyTorch-Transformers library (Wolf et al., 2020), a library of state-of-the-art pre-trained models for NLP, to fine-tune our models. Specifically, our code for fine-tuning T5-base is adapted from PICARD's implementation
(Scholak et al., 2021). Furthermore, we also use DeepSpeed (2023) to fine-tune all of our T5-base models.
Datasets. We used Spider (Yu et al., 2018b), NatSQL (Gan et al., 2021), Spider-CG (Gan et al.,
2022), and SMCalFlow-CS (Yin et al., 2021) in our work. They are under the license of CC BY-SA
4.0. Our use of these datasets is consistent with their intended use, i.e., for scientific research. All datasets are in English. They contain annotated NL
and SQL or NatSQL or LISP expression pairs from the open domain.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✗ A2. Did you discuss any potential risks of your work?
We don't see the potential of how our two techniques can be misused.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
2, 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
B
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Sensitive contents are unlikely to be contained in the datasets we used. For example, for Spider-CG,
it is annotated by domain experts.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
B
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 4.1
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4.1

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-hipool | {H}i{P}ool: Modeling Long Documents Using Graph Neural Networks | https://aclanthology.org/2023.acl-short.16 | Encoding long sequences in Natural Language Processing (NLP) is a challenging problem. Though recent pretraining language models achieve satisfying performances in many NLP tasks, they are still restricted by a pre-defined maximum length, making them challenging to be extended to longer sequences. So some recent works utilize hierarchies to model long sequences. However, most of them apply sequential models for upper hierarchies, suffering from long dependency issues. In this paper, we alleviate these issues through a graph-based method. We first chunk the sequence with a fixed length to model the sentence-level information. We then leverage graphs to model intra- and cross-sentence correlations with a new attention mechanism. Additionally, due to limited standard benchmarks for long document classification (LDC), we propose a new challenging benchmark, totaling six datasets with up to 53k samples and 4034 average tokens{'} length. Evaluation shows our model surpasses competitive baselines by 2.6{\%} in F1 score, and 4.8{\%} on the longest sequence dataset. Our method is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences. | # Hipool: Modeling Long Documents Using Graph Neural Networks
Irene R. Li1, Aosong Feng2, Dragomir Radev2, Rex Ying2 1University of Tokyo, 2Yale University [email protected], {aosong.feng, dragomir.radev, rex.ying}@yale.edu
## Abstract
Encoding long sequences in Natural Language Processing (NLP) is a challenging problem.
Though recent pretraining language models achieve satisfying performances in many NLP
tasks, they are still restricted by a pre-defined maximum length, making them challenging to be extended to longer sequences. So some recent works utilize hierarchies to model long sequences. However, most of them apply sequential models for upper hierarchies, suffering from long dependency issues. In this paper, we alleviate these issues through a graph-based method. We first chunk the sequence with a fixed length to model the sentence-level information. We then leverage graphs to model intraand cross-sentence correlations with a new attention mechanism. Additionally, due to limited standard benchmarks for long document classification (LDC), we propose a new challenging benchmark, totaling six datasets with up to 53k samples and 4034 average tokens' length. Evaluation shows our model surpasses competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence dataset. Our method is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences.
## 1 Introduction
Transformer-based models like BERT (Vaswani et al., 2017a) and RoBERTa (Zhuang et al., 2021)
have achieved satisfying results in many Natural Language Processing (NLP) tasks thanks to largescale pretraining (Vaswani et al., 2017b). However, they usually have a fixed length limit, due to the quadratic complexity of the dense self-attention mechanism, making it challenging to encode long sequences.
One way to solve this problem is to adapt Transformers to accommodate longer inputs and optimize the attention from BERT (Feng et al., 2022; Jaszczur et al., 2021). BigBird (Zaheer et al., 2020)
applies sparse attention that combines random, global, and sliding window attention in a long sequence, reducing the quadratic dependency of full attention to linear. Similarly, Longformer (Beltagy et al., 2020) applies an efficient self-attention with dilated windows that scale linearly to the window length. Both models can take up to 4096 input tokens. Though it is possible to train even larger models for longer sequences, they are restricted by a pre-defined maximum length with poor scalability.
More importantly, they fail to capture high-level structures, such as relations among sentences or paragraphs, which are essential to improving NLP
system performance (Zhang et al., 2018; Zhu et al., 2019).
Another way is to apply a hierarchical structure to process adjustable input lengths with chunking representations for scalability on long sequences. Hi-Transformer (Wu et al., 2021) encodes both sentence-level and document-level representations using Transformers. ToBERT (Pappagari et al.,
2019) applies a similar approach that stacks a sentence-level Transformer over a pretrained BERT
model. While most of the existing work models upper-level hierarchy using *sequential structures*,
such as multiple layers of LSTMs (Hochreiter and Schmidhuber, 1997) or Transformers, this may still bring the long dependency issue when the sequence gets longer. To alleviate this, we investigate graph modeling as a novel hierarchy for upper levels.
Besides, we also consider inter-hierarchy relationships using a new attention mechanism.
Our key insight is to replace the sequence-based model with a hierarchical attentional graph for long documents. We first apply a basic pretrained language model, BERT or RoBERTa, to encode local representations on document chunks with a fixed length. The number of chunks could be extended for longer sequences for better scalability. Different from other works, we apply a graph neural network (GNN) (Zhou et al., 2018) to model the upper-level hierarchy to aggregate local sentence information. This is to alleviate the long dependency issue of the sequential model. Moreover, within such a graph structure, we propose a new heterogeneous attention mechanism to consider intra- and cross-sentence-level correlations.
Our contributions are two-fold: 1) We propose HiPool with multi-level hierarchies for long sequence tasks with a novel inter-hierarchy graph attention structure. Such heterogeneous graph attention is shown to outperform hierarchical sequential models with better performance and scalability, especially for longer sequences; 2) We benchmark the LDC (long document classification) task with better scaled and length-extended datasets. Evaluation shows that HiPool surpasses competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence dataset. Code is available at https:
//github.com/IreneZihuiLi/HiPool.
Figure 1: HiPool model illustration. It consists of a sequence encoder, HiPool graph encoder and a linear layer.
## 2 Model
We introduce the HiPool (Hierarchical **Pool**ing)
model for long document classification, illustrated in Fig. 1. It consists of an overlapping sequence encoder, a HiPool graph encoder, and a linear layer.
Overlapping Sequence Encoder. Given the input document S, we first chunk the document into a number of shorter pieces with a fixed length L,
and we set the overlapping window size to be $L_{olp}$.
Overlapping encoding makes it possible for a chunk to carry information from its adjacent chunks rather than being isolated, differentiating our model from other hierarchical ones. Then each chunk is encoded with a pretrained Transformer model, i.e., BERT or RoBERTa; we choose the CLS token representation as the input to our HiPool layer: $X = \mathrm{BERT}(S)$.
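A minimal sketch of this overlapping chunk encoder is shown below, assuming a BERT backbone and the chunk length $L=300$ with overlap $L/2=150$ reported in Appendix C; the exact chunk-boundary handling is our own simplification.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_chunks(document: str, L: int = 300, overlap: int = 150) -> torch.Tensor:
    """Return X: one [CLS] vector per overlapping chunk, shape (num_chunks, hidden)."""
    ids = tokenizer(document, add_special_tokens=False)["input_ids"]
    stride = L - overlap
    cls_vectors = []
    for s in range(0, max(len(ids) - overlap, 1), stride):
        chunk = [tokenizer.cls_token_id] + ids[s:s + L] + [tokenizer.sep_token_id]
        with torch.no_grad():
            out = encoder(input_ids=torch.tensor([chunk]))
        cls_vectors.append(out.last_hidden_state[:, 0])  # [CLS] representation
    return torch.cat(cls_vectors, dim=0)

X = encode_chunks("a long document " * 500)
print(X.shape)  # (num_chunks, 768)
```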
HiPool Graph Encoder. We apply a graph neural network to encode incoming word-level information. Such a model has shown its potential in some NLP tasks (Li et al., 2022, 2021). We construct a graph, defined by G(*V, E*), where V is a set of nodes, and E is a set of node connections.
There are two node types: n *low-level nodes* and m *high-level nodes*, and typically *m < n*. In our experiment, we set m = n/p, and p ≥ 0. The feedforward operation goes from low- to high-level nodes. In layer l, low-level nodes are inputs from the previous layer l − 1, while high-level nodes at layer l are computed based on low-level ones.
Moreover, these high-level nodes will be the input to the next layer l + 1, becoming the low-level nodes in that layer. We consider X the low-level
![1_image_0.png](1_image_0.png)
nodes in the first HiPool layer, as shown in the figure.
In each HiPool layer, given node representation $H^l$ and adjacency matrix $A^l$ at layer $l$, the task is to obtain $H^{l+1}$:

$$H^{l+1}=\mathrm{HiPool}(H^{l},A^{l}).\tag{1}$$
Inspired by DiffPool (Ying et al., 2018), we conduct a clustering method to aggregate information.
We assign node clusters with a fixed pattern based on their position. For example, adjacent low-level neighbors should map to the same high-level clustering node. So we first define a clustering adjacency matrix $A_{self} \in \mathbb{R}^{n\times m}$ that maps $n$ nodes to $m$ nodes, indicating the relations from low- to high-level nodes, marked as black arrows in the figure. Note that our approach allows overlapping, in which some nodes may belong to two clusters.
We set the clustering sliding window to be $2p$, with a stride of $p$. In the figure, we show the case of $p = 2$. We denote interactions between low-level nodes by the adjacency matrix $A^l$, and we model it using a chain graph, according to the natural order of the document. Then, the relations between high-level nodes $A^l_{high}$ and their node representations $H^l_{high}$ are computed:

$$A_{high}^{l}=A_{self}^{T}A^{l}A_{self},\qquad H_{high}^{l}=A_{self}H^{l}.\tag{2}$$
Besides, for each high-level node, to strengthen the connections across different clusters, we propose an attention mechanism to obtain cross-sentence information. We propose a new edge type that connects external-cluster low-level nodes to each high-level node, and the adjacency matrix is simply $A_{cross} = 1 - A_{self}$, marked by green in the figure. We update $H^l_{high}$ as the following:
$$W_{score}=H_{high}^{l}W_{atten}(H^{l})^{T},\qquad W_{score}=W_{score}A_{cross}^{T},\qquad H_{high}^{l}\leftarrow W_{score}H^{l}+H_{high}^{l},\tag{3}$$
where $W_{atten}$ is trainable, and $W_{score}$ is a scoring matrix. We then apply a GNN to obtain $H^{l+1}$.
For example, a graph convolution network (GCN)
(Kipf and Welling, 2016):
$$H^{l+1}=\mathrm{GCN}(H_{high}^{l},A_{high}^{l}).$$
We run our experiments with two layers, and apply a sum aggregator to achieve document embeddings. More HiPool layers are also possible.
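For concreteness, a simplified dense-matrix sketch of one HiPool layer is given below. It follows Eqs. (1)-(3) and a GCN-style update, but the transposes used to make the shapes line up, the normalization, and the elementwise masking with $A_{cross}^{T}$ are our own reading of the equations, not the authors' released code.

```python
import torch
import torch.nn as nn

class HiPoolLayer(nn.Module):
    """Sketch of one HiPool layer: position-based clustering, cross-cluster
    attention, and a dense GCN-style update over high-level nodes."""

    def __init__(self, dim: int, p: int = 2):
        super().__init__()
        self.p = p
        self.w_atten = nn.Linear(dim, dim, bias=False)  # W_atten in Eq. (3)
        self.w_gcn = nn.Linear(dim, dim)                # GCN weight in the final update

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        n, _ = h.shape
        m = max(n // self.p, 1)
        # Chain graph over the n low-level nodes (document order).
        a = torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
        # Clustering assignment A_self: sliding window 2p with stride p (overlapping).
        a_self = torch.zeros(n, m)
        for j in range(m):
            a_self[j * self.p: j * self.p + 2 * self.p, j] = 1.0
        a_high = a_self.T @ a @ a_self      # high-level adjacency, Eq. (2)
        h_high = a_self.T @ h               # cluster features (transpose assumed for shapes)
        # Cross-cluster attention, Eq. (3): score low-level nodes outside each cluster.
        a_cross = 1.0 - a_self
        w_score = (h_high @ self.w_atten(h).T) * a_cross.T
        h_high = w_score @ h + h_high
        # Dense GCN-style propagation over the m high-level nodes.
        a_hat = a_high + torch.eye(m)
        d_inv = torch.diag(a_hat.sum(-1).clamp(min=1e-6).pow(-0.5))
        return torch.relu(d_inv @ a_hat @ d_inv @ self.w_gcn(h_high))

# Two stacked layers reduce 8 chunk embeddings to 2 high-level nodes.
layer1, layer2 = HiPoolLayer(dim=768), HiPoolLayer(dim=768)
doc = layer2(layer1(torch.randn(8, 768))).sum(dim=0)  # sum aggregator -> document embedding
print(doc.shape)  # torch.Size([768])
```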
Linear Layer. Finally, a linear layer is connected and cross-entropy loss is applied during training.
## 3 Experiments

## 3.1 LDC Benchmark
The LDC benchmark contains six datasets. We first choose four widely-used public datasets. **Hyperpartisan** (HYP) (Kiesel et al., 2019) and **20NewsGroups** (20NG) (Lang, 1995) are both news text datasets with different scales. **IMDB** (Maas et al.,
2011) is a movie review dataset for sentiment classification. **ILDC** (Malik et al., 2021) is a large corpus of legal cases annotated with binary court decisions ("accepted" and "rejected").
Limitations and new datasets. However, 20NewsGroups and IMDB cannot test the limit of models in encoding long documents since their average document length is still relatively small, whereas Hyperpartisan only contains 645 examples and is thus prone to overfitting and not representative. ILDC
is large and contains long texts, but it is mainly in the legal domain. Therefore, to enrich the evaluation scenarios, we select and propose two new benchmarks with longer documents based on an existing large-scale corpus, Amazon product reviews
(He and McAuley, 2016), to conduct long document classification. **Amazon-512** (A-512) contains all reviews that are longer than 512 words from the *Electronics* category; **Amazon-2048** (A-2048)
| | HYP | 20NG | IMDB | A-512 | A-2048 | ILDC |
|-------|--------|---------|---------|---------|----------|---------|
| Mean | 741.44 | 587.56 | 301.14 | 879.62 | 2,915.03 | 4039.85 |
| Max | 5,368 | 144,592 | 3,152 | 17,988 | 14,120 | 501,091 |
| Min | 21 | 37 | 10 | 512 | 2,048 | 53 |
| Med. | 547 | 360 | 225 | 725 | 2,505 | 2,663 |
| 95pt. | 2,030 | 1,229 | 771 | 1,696 | 5,216 | 11,416 |
| Total | 645 | 18,846 | 50,000 | 53,471 | 10,000 | 34,816 |
| Class | 2 | 20 | 2 | 5 | 5 | 2 |
Table 1: Dataset statistics on LDC benchmark. Med.
is the median value. 95pt. indicates 95th percentile.
Class indicates the number of classes.
contains 10,000 randomly sampled reviews that are longer than 2048 words from the *Books* category. We randomly split 8/1/1 as train/dev/test sets for both datasets. The proposed datasets enable us to draw statistically significant conclusions on model performance as sequence lengths increase, as demonstrated in Table 1.
## 3.2 Evaluation
Hyperparameters. We list details in Appendix C.
Baselines. We select four pretrained models:
BERT (Devlin et al., 2019), RoBERTa (Zhuang et al., 2021), BigBird (Zaheer et al., 2020) and Longformer (Beltagy et al., 2020). We also compare with a hierarchical Transformer model ToBERT (Pappagari et al., 2019). Hi-Transformer
(Wu et al., 2021) could not be reproduced as there is no code available. We evaluate two variations of our HiPool method by changing the sequence encoder model: HiPool-BERT and HiPool-RoBERTa.
We report the Micro-F1 score in Tab. 2.
Main Results. Among the pretrained models, Longformer and BigBird perform better than BERT
and RoBERTa. ToBERT can only surpass BERT as it is a hierarchical model that applies BERT as its text encoder. On average, HiPool-BERT improves significantly on BERT by 5.9% and on ToBERT
by 3%. Compared to ToBERT, the superior performance of HiPool can be explained by the fact that sentence-level representations in ToBERT fails to capture cross-sentence information. HiPool surpasses baselines on A-512, A-2048 and ILDC that contain longer sequences. Notably, the best model, HiPool-RoBERTa, outperforms BigBird by 4.8%
on ILDC. While our model applies a basic pretrained text encoder (the maximum length is 512),
it can still surpass larger pretrained language models (i.e., the maximum length is 4096). Although HiPool is worse on HYP and IMDB, we note that HYP only has 65 examples in testing and is prone to overfitting. We further show that even in IMDB,
HiPool still out-performs the best model for long
| | HYP | 20NG | IMDB | A-512 | A-2048 | ILDC | Avg. |
|----------------|-------------|-------------|-------------|-------------|-------------|-------------|-------|
| BERT | 0.857 | 0.853 | 0.913 | 0.592 | 0.503 | 0.556 | 0.712 |
| RoBERTa | 0.874 | 0.857 | 0.953 | 0.650 | 0.579 | 0.560 | 0.745 |
| BigBird | 0.922 | 0.823 | 0.952 | 0.674 | 0.636 | 0.637 | 0.774 |
| Longformer | 0.938 | 0.863 | 0.957 | 0.673 | 0.612 | 0.562 | 0.768 |
| ToBERT | 0.862 | 0.901 | 0.924 | 0.587 | 0.560 | 0.611 | 0.741 |
| HiPool-BERT | 0.865±0.030 | 0.908±0.005 | 0.931±0.001 | 0.660±0.009 | 0.612±0.011 | 0.651±0.010 | 0.771 |
| HiPool-RoBERTa | 0.886±0.018 | 0.904±0.001 | 0.948±0.001 | 0.690±0.007 | 0.648±0.017 | 0.685±0.018 | 0.794 |

Table 2: Main evaluation results on LDC benchmark. We underscore the best average of baselines, and bold the best overall models.
| Hierarchy (Sequential) | F1 | Hierarchy (Graph) | F1 |
|-------------|-------|-------------|-------|
| Simple | 0.618 | Aggr-mean | 0.621 |
| CNN | 0.608 | Aggr-std | 0.620 |
| Trans. | 0.560 | Aggr-pna | 0.633 |
| | | HiPool | 0.648 |

Table 3: Hierarchy variations (F1 on Amazon-2048).
sequence in Appendix A.
Hierarchy variations. To further compare sequential and graph hierarchies, we keep the word encoder and replace the HiPool graph encoder with the following sequential modules: Simple is a linear summation over low-level nodes; CNN applies a 1-dimensional convolution; Trans. applies a Transformer on top of low-level nodes. Besides, we also look at multiple graph settings: Aggr-mean uses a mean aggregator to obtain the final document representation; Aggr-std uses a feature-wise standard deviation aggregator; finally, Aggr-pna applies Principal Neighbourhood Aggregation (PNA) (Corso et al., 2020). We report results on Amazon-2048 in Tab. 3, as it has the longest sequences on average. We observe that applying aggregators is better than using simpler structures, while keeping a graph is still the better choice. HiPool additionally considers attention in message passing, so it does even better. We also test other variations in Appendix B.
## 3.3 Ablation Study
Effect of input length. To better understand the effect of input length, in Fig. 2, we present an ablation study on Amazon-2048 and ILDC, and compare three models: BigBird, Longformer, and HiPool. In general, the models benefit from longer input sequences in both datasets. Interestingly, when the sequence is longer than 2048 tokens, Longformer and BigBird cannot improve further, as they are limited by their maximum lengths. In contrast, as the input sequence gets longer, HiPool steadily improves,
![3_image_0.png](3_image_0.png)
| | A-512 | A-2048 | ILDC | Avg. |
|-----------------|----------|--------|--------|-------|
| HiPool-RoBERTa | 0.690 | 0.648 | 0.685 | 0.674 |
| w/o RoBERTa | 0.660 | 0.612 | 0.651 | 0.641 |
| w/o HiPool | 0.601 | 0.578 | 0.620 | 0.600 |
| w/o Overlapping | 0.587 | 0.560 | 0.611 | 0.586 |
showing its ability to encode long documents in a hierarchical structure.
Model component. Next, we look at how each component of HiPool affects performance. As shown in Tab. 4, we first take the best model setting, HiPool-RoBERTa, and compare it with the following settings: 1) w/o RoBERTa replaces RoBERTa with BERT, so the model becomes HiPool-BERT; 2) w/o HiPool removes the proposed HiPool module and replaces it with a simple CNN (Kim, 2014); 3) w/o Overlapping removes the overlapping word encoding. We can see that removing the HiPool layer leads to a significant drop, indicating the importance of the proposed method. Moreover, the HiPool framework can work with many pretrained language models, as applying RoBERTa improves over BERT. A complete result table can be found in the Appendix.
## 4 Conclusion
In this paper, we proposed a hierarchical framework for long document classification. The evaluation shows our model surpasses competitive baselines.
## 5 Limitations And Potential Risks
Limitations The model we proposed is specifically for classification, though it could be extended to other NLP tasks by changing the high-level task-specific layer. Besides, in the evaluation, we focused on English corpora. We plan to test on other languages in the future.
Potential Risks We make our code publicly available so that everyone can access our code. As the model is a classification model, it does not generate risky content. Users should also notice that the classification predictions may not be perfectly correct.
## 6 Acknowledgements
This paper is dedicated to the memory of Professor
![4_image_0.png](4_image_0.png)
Dragomir Radev, who passed away while this paper was being peer-reviewed.
## References
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. *CoRR*,
abs/2004.05150.
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. 2020. Principal neighbourhood aggregation for graph nets.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of
the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Aosong Feng, Irene Li, Yuang Jiang, and Rex Ying.
2022. Diffuser: Efficient transformers with multihop attention diffusion for long sequences. arXiv preprint arXiv:2210.11794.
William L. Hamilton, Zhitao Ying, and Jure Leskovec.
2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1024–1034.
Ruining He and Julian J. McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In *Proceedings of the 25th International Conference on* World Wide Web, WWW 2016, Montreal, Canada, April 11 - 15, 2016, pages 507–517. ACM.
Sepp Hochreiter and Jürgen Schmidhuber. 1997.
Long short-term memory. *Neural Comput.*,
9(8):1735–1780.
Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Lukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, and Jonni Kanerva. 2021. Sparse is enough in scaling transformers. Advances in Neural Information Processing Systems, 34:9895–9907.
Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829–839, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In *Proceedings of the* 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*.
Ken Lang. 1995. Newsweeder: Learning to filter netnews. In *Machine Learning, Proceedings of the* Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995, pages 331–339. Morgan Kaufmann.
Irene Li, Linfeng Song, Kun Xu, and Dong Yu. 2022.
Variational graph autoencoding as cheap supervision for AMR coreference resolution. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2790–2800, Dublin, Ireland. Association for Computational Linguistics.
Irene Li, Vanessa Yan, Tianxiao Li, Rihao Qu, and Dragomir Radev. 2021. Unsupervised cross-domain prerequisite chain learning using variational graph autoencoders. *arXiv preprint arXiv:2105.03505*.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Vijit Malik, Rishabh Sanjay, Shubham Kumar Nigam, Kripabandhu Ghosh, Shouvik Kumar Guha, Arnab Bhattacharya, and Ashutosh Modi. 2021. ILDC for CJPE: Indian legal documents corpus for court judgment prediction and explanation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 4046–4062, Online.
Association for Computational Linguistics.
Raghavendra Pappagari, Piotr Zelasko, Jesús Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification.
In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 838–844.
IEEE.
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *The Semantic Web - 15th* International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of *Lecture Notes in Computer Science*, pages 593–607. Springer.
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Don Metzler. 2021. Long range arena : A benchmark for efficient transformers. In ICLR 2021.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017a. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. 2017b. Attention is all you need.
Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021. Hi-transformer: Hierarchical interactive transformer for efficient and effective long document modeling. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021,
(Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 848–853. Association for Computational Linguistics.
Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec.
2018. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 4805–4815.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Tianyang Zhang, Minlie Huang, and Li Zhao. 2018.
Learning structured representation for text classification via reinforcement learning. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 32.
Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Graph neural networks: A review of methods and applications.
CoRR, abs/1812.08434.
Ming Zhu, Aman Ahuja, Wei Wei, and Chandan K
Reddy. 2019. A hierarchical attention retrieval model for healthcare question answering. In The World Wide Web Conference, pages 2472–2482.
Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A
robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218–1227, Huhhot, China. Chinese Information Processing Society of China.
## A Imdb-Long Dataset
HiPool Performs the Best for Long Sequences in IMDB. As a supplementary analysis, we look at the IMDB dataset, in which HiPool performs worse than BigBird and Longformer. We select the sequences that are longer than 512 tokens to construct the **IMDB-long** dataset, resulting in 3,250 training and 3,490 testing samples. We show the detailed statistics of the IMDB-long dataset in Tab. 5 and the evaluation in Fig. 3. We can observe that HiPool does better for long sequences.
| Train | Test | |
|------------|--------|--------|
| Mean | 761.35 | 764.65 |
| Max | 2,977 | 3,152 |
| Min | 512 | 512 |
| Med | 689 | 693 |
| 50th pctl. | 689 | 693 |
| 95th pctl. | 1,236 | 1,232 |
| Total | 3,250 | 3,490 |
Table 5: IMDB-long dataset statistics.
![6_image_0.png](6_image_0.png)
## B Graph Variations
We study other possible GNN types for hierarchy modeling. In Eq. 1, we replace the HiPool graph encoder with a GCN or GAT encoder. We apply two layers of the graph networks before the linear layer to compare fairly, and show results in Tab. 6. We notice that using GCN and GAT results in lower performance than that of HiPool. A possible reason is that they only focus on modeling the low-level nodes, ignoring a cross-sentence attention mechanism to strengthen high-level communication on long sequences like HiPool.
| | HYP | 20NG | IMDB | A-512 | A-2048 | ILDC | Avg. |
|-----------------|-------|-------|-------|-------|--------|-------|-------|
| BERT-GCN | 0.859 | 0.904 | 0.927 | 0.645 | 0.591 | 0.623 | 0.758 |
| BERT-GAT | 0.846 | 0.907 | 0.929 | 0.653 | 0.602 | 0.626 | 0.760 |
| BERT-HiPool | 0.865 | 0.908 | 0.931 | 0.660 | 0.612 | 0.651 | **0.771** |
| RoBERTa-GCN | 0.874 | 0.903 | 0.944 | 0.670 | 0.631 | 0.656 | 0.780 |
| RoBERTa-GAT | 0.849 | 0.899 | 0.945 | 0.678 | 0.640 | 0.673 | 0.781 |
| RoBERTa-HiPool | 0.886 | 0.904 | 0.948 | 0.690 | 0.648 | 0.690 | **0.794** |
Table 6: Comparison of other GNN types: we report F1 scores for individual dataset and the average. HiPool method is better than GCN and GAT.
## C Hyperparameters, Experimental Settings
We run our experiments on 4 NVIDIA RTX A6000 GPUs, each with 48GB of memory. We list hyperparameters for the baseline models and HiPool in Tab. 7. For all datasets, we apply the Adam optimizer (Kingma and Ba, 2014) for all experiments. For HiPool, we set the chunk length $L = 300$, and the overlapping length $L_{olp}$ is $L/2 = 150$. We apply two layers of HiPool, reducing the number of nodes in each layer by $p = 2$. Among the baseline models, ToBERT (Pappagari et al., 2019) is adjustable for the maximum length, because it takes the maximum value in a batch during training. We evaluated F1 scores using scikit-learn: https://scikit-learn.org/stable/.
| | HYP | 20NG | IMDB | A-512 | A-2048 | ILDC | Time* |
|------------------------|------|------|------|-------|--------|------|-------|
| **BERT, RoBERTa** | | | | | | | 20 |
| max_len | 512 | 512 | 512 | 512 | 512 | 512 | |
| #epoch | 10 | 10 | 10 | 10 | 10 | 10 | |
| learning rate | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | |
| **BigBird, Longformer**| | | | | | | 40 |
| max_len | 1024 | 1024 | 1024 | 2048 | 4096 | 4096 | |
| #epoch | 10 | 10 | 10 | 10 | 10 | 10 | |
| learning rate | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | |
| **ToBERT** | | | | | | | 25 |
| #epoch | 8 | 10 | 10 | 12 | 12 | 12 | |
| learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | |
| **HiPool** | | | | | | | 50×5 |
| #max_node | 10 | 8 | 8 | 10 | 15 | 15 | |
| #epoch | 8 | 10 | 10 | 12 | 12 | 12 | |
| learning rate: BERT | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 5e-6 | |
| learning rate: RoBERTa | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 5e-6 | |
Table 7: Hyperparameters for baseline models and HiPool. Time* indicates how many hours on overall trial, training and testing using a single GPU. Note that we report average and standard deviation for HiPool, so we ran the evaluation at least 5 times there.
## D Frequently Asked Questions

- Q: *Why do we call it a heterogeneous graph?*
A: We use the term "heterogeneous"to distinguish the nodes from the graph. We wish to emphasize that the nodes are not the same, and they come from multiple levels and represent different information.
- Q: *Are there other possible variations for modeling the hierarchy?*
A: Yes, our HiPool model is a framework that applies a graph structure for high-level hierarchy, so it is possible to apply other GNN models. One can use Relational Graph Convolutional Networks
(R-GCNs) (Schlichtkrull et al., 2018) to model the different relations for A*self* and A*cross*. Besides, some inductive methods like GraphSAGE (Hamilton et al., 2017) can also be applied to obtain node embeddings in the graph. We leave this topic as future work.
- Q: *How does the aggregator work in Tab. 3?*
A: We replace the sum aggregator of our original HiPool with the mentioned aggregators. The applied PyTorch implementation: https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#aggregation-operators.
- Q: *Why did we not evaluate on the LRA (Long Range Arena) (Tay et al., 2021) benchmark?*
A: LRA is more suitable for testing the efficiency of Transformer-based models and it consists of multiple types of long sequences. As we mentioned in the Introduction, our proposed model belongs to another category for long sequence encoding, not the efficiency transformer category that focuses on optimizing KQV attention.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Appendix Section E
✓ A2. Did you discuss any potential risks of your work?
Appendix Section E
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section I
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 3
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3, Appendix A
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C, Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix C, Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix B, C, Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix C, D, Section 3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
yoder-etal-2023-weakly | A Weakly Supervised Classifier and Dataset of White Supremacist Language | https://aclanthology.org/2023.acl-short.17 | We present a dataset and classifier for detecting the language of white supremacist extremism, a growing issue in online hate speech. Our weakly supervised classifier is trained on large datasets of text from explicitly white supremacist domains paired with neutral and anti-racist data from similar domains. We demonstrate that this approach improves generalization performance to new domains. Incorporating anti-racist texts as counterexamples to white supremacist language mitigates bias. |
## A Weakly Supervised Classifier And Dataset Of White Supremacist Language
Michael Miller Yoder1 Ahmad Diab2 David West Brown3 **Kathleen M. Carley**1 1Software and Societal Systems Department, Carnegie Mellon University 2Department of Computer Science, University of Pittsburgh 3Department of English, Carnegie Mellon University Pittsburgh, Pennsylvania, USA
[email protected] [email protected] [email protected] [email protected]
## Abstract
We present a dataset and classifier for detecting the language of white supremacist extremism, a growing issue in online hate speech.
Our weakly supervised classifier is trained on large datasets of text from explicitly white supremacist domains paired with neutral and anti-racist data from similar domains. We demonstrate that this approach improves generalization performance to new domains. Incorporating anti-racist texts as counterexamples to white supremacist language mitigates bias.
## 1 Introduction
The spread of white supremacist extremism online has motivated offline violence, including recent mass shootings in Christchurch, El Paso, Pittsburgh, and Buffalo. Though some research in natural language processing has focused on types of hate speech, such as anti-Black racism (Kwok and Wang, 2013) and misogyny (Fersini et al., 2018),
little work has focused on detecting specific hateful ideologies. Practitioners have called for such systems, particularly for white supremacism (ADL, 2022; Yoder and Habib, 2022).
To detect white supremacist language, we build text classifiers trained on data from a large, diverse set of explicitly white supremacist online spaces, filtered to ideological topics.1In a weakly supervised set-up, we train discriminative classifiers to distinguish texts in white supremacist domains from texts in similar online spaces that are not known for white supremacism. These classifiers outperform prior work in white supremacist classification on three annotated datasets, and we find that the best-performing models use a combination of weakly and manually annotated data.
Hate speech classifiers often have difficulty generalizing beyond data they were trained on (Swamy 1See https://osf.io/274z3/ to access public parts of this dataset and others used in this paper.
et al., 2019; Yoder et al., 2022). We evaluate our classifiers on unseen datasets annotated for white supremacism from a variety of domains and find strong generalization performance for models that incorporate weakly annotated data.
Hate speech classifiers often learn to associate any mention of marginalized identities with hate, regardless of context (Dixon et al., 2017). To address this potential issue with white supremacist classification, we incorporate anti-racist texts, which often mention marginalized identities in positive contexts, as counter-examples to white supremacist texts. Evaluating on a synthetic test set with mentions of marginalized identities in a variety of contexts (Röttger et al., 2021), we find that including anti-racist texts helps mitigate this bias.
## 2 The Language Of White Supremacist Extremism
This work focuses on white supremacist extremism, social movements advocating for the superiority of white people and domination or separation from other races (Daniels, 2009). This fringe movement both exploits the bigotries widely held in societies with structural white supremacism and makes them explicit (Ferber, 2004; Berlet and Vysotsky, 2006; Pruden et al., 2022). Key beliefs of white supremacist extremism are that race and gender hierarchies are fixed, that white people's "natural" power is threatened, and that action is needed to protect the white race (Ferber and Kimmel, 2000; Brown, 2009; Perry and Scrivens, 2016; Ansah, 2021).
Many qualitative studies have examined the language of white supremacism (Thompson, 2001; Duffy, 2003; Perry and Scrivens, 2016; Bhat and Klein, 2020). Computational models have been developed to identify affect (Figea et al., 2016),
hate speech (de Gibert et al., 2019), and violent intent (Simons and Skillicorn, 2020) within white supremacist forums.
Two other studies have built models to detect white supremacist ideology in text. Alatawi et al.
(2021) test Word2vec/BiLSTM models, pre-trained on a corpus of unlabeled white supremacist forum data, as well as BERT models. To estimate the prevalence of white supremacism on Twitter after the 2016 US election, Siegel et al. (2021)
build a dictionary-based classifier and validate their findings with unlabeled alt-right Reddit data. In contrast, we use a large, domain-general white supremacist corpus with carefully selected negative training examples to build a weakly supervised discriminative classifier for white supremacism.
## 2.1 Hate Speech And White Supremacism
The relationship between hate speech and white supremacism has been theorized and annotated in different ways. Some have annotated the glorification of ideologies and groups such as Nazism and the Ku Klux Klan separately from hate speech (Siegel et al., 2021; Rieger et al., 2021),
which is often defined as verbal attacks on groups based on their identity (Sanguinetti et al., 2018; Poletto et al., 2021; de Gibert et al., 2019). A user of Stormfront, a white supremacist forum, notes this distinction to evade moderation on other platforms:
"Nationalist means defending the white race; racist means degrading non-white races. You should be fine posting about preserving the white race as long as you don't degrade other races."2 We aim to capture the expression of white supremacist ideology beyond just hate speech against marginalized identities (see Figure 1). In contrast, de Gibert et al. (2019) ask annotators to identify hate speech within a white supremacist forum. They note that some content that did not fit strict definitions of hate speech still exhibited white supremacist ideology. Examples of this from data used in the current paper include "diversity means chasing down whites" (white people being threatened) and "god will punish as he did w/ hitler"
(action needed to protect white people).
## 3 Weakly Annotated Data
It is difficult for annotators to determine whether the short texts commonly used in NLP and computational social science, such as tweets, express white supremacism or other far-right ideologies. Alatawi et al. (2021) struggle to reach adequate 2Quotes in this paper are paraphrased for privacy (Williams et al., 2017)
![1_image_0.png](1_image_0.png)
inter-annotator agreement on white supremacism in tweets. Hartung et al. (2017) note that individual tweets are difficult to link to extreme right-wing ideologies and instead choose to annotate user tweet histories.
Instead of focusing on individual posts, we turn to *weak supervision*, approaches to quickly and cheaply label large amounts of training data based on rules, knowledge bases or other domain knowledge (Ratner et al., 2017). Weakly supervised learning has been used in NLP for tasks such as cyberbullying detection (Raisi and Huang, 2017),
sentiment analysis (Kamila et al., 2022), dialogue systems (Hudecek et al. ˇ , 2021) and others (Karamanolakis et al., 2021). For training the discriminative white supremacist classifier, we draw on three sources of text data with "natural" (weak) labels:
white supremacist domains and organizations, neutral data with similar topics, and anti-racist blogs and organizations.
## 3.1 White Supremacist Data
We sample existing text datasets and data archives from white supremacist domains and organizations to build a dataset of texts that likely express white supremacist extremism. Table 1 details information on source datasets.
Sources include sites dedicated to white supremacism, such as Stormfront, Iron March, and the Daily Stormer. When possible, we filter out non-ideological content on these forums using existing topic structures, for example, excluding
| Data source | Platform | # Posts | Excerpt from example post |
|-------------------------|--------------------|-----------|------------------------------------------|
| Papasavva et al. (2020) | 4chan | 2,686,267 | africans are inferior animals |
| Stormfront archive | Stormfront | 751,980 | help the white race |
| Jokubauskaitė and Peeters (2020) | 4chan | 578,650 | we need to drop the nazism no , we need to do the opposite |
| Iron March archive | Iron March | 179,468 | disgusting looking fat ch*nk cuckold |
| Qian et al. (2018) | Twitter | 84,695 | keep illegal immigrants out |
| Patriot Front archive | Discord | 39,577 | interracial dating i find that appalling |
| Calderón et al. (2021) | Daily Stormer, Amer. Renaissance | 26,099 | black - on - white murders it never ends |
| Pruden et al. (2022) | books, manifestos | 17,007 | preventing the ongoing islamisation |
| ElSherief et al. (2021) | Twitter | 3,480 | desert barbarians will destroy the west |
Table 1: Information on white supremacist corpus before filtering and sampling. *Warning: offensive examples.*
"Computer Talk" and "Opposing Views" forums on Stormfront. We also include tweets from organizations that the Southern Poverty Law Center labels as white supremacist hate groups (Qian et al., 2018; ElSherief et al., 2021). In Papasavva et al.'s (2020) dataset from the 4chan /pol/ "politically incorrect" imageboard, we select posts from users choosing Nazi, Confederate, fascist, and white supremacist flags. We also include 4chan /pol/ posts in "general" threads with fascist and white supremacist topics
(Jokubauskaite and Peeters ˙ , 2020). From Pruden et al. (2022), we include white supremacist books and manifestos. We also include leaked chats from Patriot Front, a white supremacist group. Details on these datasets can be found in Appendix A.
With over 230 million words in 4.3 million posts across many domains, this is the largest collection of white supremacist text we are aware of. Contents are from 1968 through 2019, though 76% of posts are from 2017-2019 (see distributions of posts over time in Appendix A).
Outlier filtering and sampling This large dataset from white supremacist domains inevitably contains many posts that are off-topic and nonideological. To build a weakly supervised classifier, we wish to further filter to highly ideological posts from a variety of domains.
We first remove posts with 10 or fewer words, as these are often non-ideological or require context to be understood (such as "reddit and twitter are cracking down today" or "poor alex, i feel bad").
We then select posts whose highest probability topic from an LDA model (Blei et al., 2003)
are ones that are more likely to express white supremacist ideology. LDA with 30 topics separated themes well based on manual inspection.
One of the authors annotated 20 posts from each topic for expressing a tenet of white supremacism, described in Section 2. We selected 6 topics with the highest annotation score for white supremacy, as this gave the best performance on evaluation datasets. These topics related to antisemitism, antiBlack racism, and discussions of European politics and Nazism (details in Appendix B). To balance forum posts with other domains and approximate domain distributions in neutral and anti-racist datasets, we randomly sample 100,000 forum posts.
This white supremacist corpus used in experiments contains 118,842 posts and 10.7 million words.
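A rough sketch of this filtering step, assuming a gensim LDA pipeline, is shown below; the tokenization, number of passes, and the set of ideological topic indices are illustrative assumptions rather than the exact procedure.

```python
from gensim import corpora
from gensim.models import LdaModel

def filter_posts(posts, ideological_topics, num_topics=30, min_words=11):
    # Drop very short posts (10 or fewer words).
    posts = [p for p in posts if len(p.split()) >= min_words]
    tokenized = [p.lower().split() for p in posts]
    dictionary = corpora.Dictionary(tokenized)
    bow = [dictionary.doc2bow(doc) for doc in tokenized]
    lda = LdaModel(bow, num_topics=num_topics, id2word=dictionary, passes=5)
    kept = []
    for post, doc in zip(posts, bow):
        # Keep posts whose most probable topic is one of the annotated
        # ideological topics (topic indices assumed to be known).
        top_topic = max(lda.get_document_topics(doc), key=lambda t: t[1])[0]
        if top_topic in ideological_topics:
            kept.append(post)
    return kept
```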
## 3.2 Neutral Data
We also construct a corpus of "neutral" (not white supremacist) data that matches the topics and domains of the white supremacist corpus. To match forum posts, we sample r/politics and r/Europe subreddits. To match tweets, we query the Twitter API by sampling the word distribution in white supremacist tweets after removing derogatory language. For articles, we sample random US news from the News on the Web (NOW) Corpus3, and use a random Discord dataset to match chat (Fan, 2021). For each of these domains, we sample the same number of posts per year as is present in the white supremacist corpus. If there is not significant time overlap, we sample enough posts to reach a similar word count. This corpus contains 159,019 posts and 8.6 million words.
## 3.3 Anti-Racist Data
Hate speech classifiers often overpredict mentions of marginalized identities as hate (Dixon et al.,
3https://www.corpusdata.org/now_corpus.asp 2017). Assuming our data is biased until proven innocent (Hutchinson et al., 2021), we design for this issue. We hypothesize that texts from anti-racist perspectives may help. Oxford Languages defines anti-racism as movements "opposing racism and promoting racial equality". Anti-racist communications often mention marginalized identities (as do white supremacist texts), but cast them in positive contexts, such as a tweet in our anti-racist dataset that reads, "stand up for \#immigrants".
We construct a corpus of anti-racist texts to match the domain and year distribution of the white supremacist corpus. For forum data, we sample comments in subreddits known for anti-racism: r/racism, r/BlackLivesMatter, and r/StopAntiAsianRacism. We include tweets from anti-racist organizations listed by the University of North Carolina Diversity and Inclusion office4.
To match articles, we scrape Medium blog posts tagged with "anti-racism", "white supremacy",
"racism", and "BlackLivesMatter". As with other corpora, data from each of these sources was inspected for its perspective. This anti-racist corpus contains 87,807 posts and 5.6 million words.
## 4 Classification
Due to the success of BERT-based hate speech models (Mozafari et al., 2019; Samghabadi et al.,
2020), we select the parameter-efficient DistilBERT model (Sanh et al., 2019) to compare data configurations5. We use a learning rate of 2×10−5, batch size of 16, and select the epoch with the highest ROC AUC on a 10% development set, up to 5 epochs. Training each model took approximately 8 hours on an NVIDIA RTX A6000 GPU.
We train models on binary white supremacist classification. All posts in the white supremacist corpus, after sampling and filtering, are labeled
'white supremacist'. Posts in neutral and anti-racist corpora are labeled 'not white supremacist'. We also test combining weakly labeled data with manually annotated data from existing datasets (see below) and our own annotation of white supremacist posts in LDA topics. Since there is relatively little manually annotated data, we duplicate it 5 times in these cases, to a size of 57,645 posts.
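A condensed sketch of how such a training set could be assembled and used to fine-tune DistilBERT with the Hugging Face Trainer is shown below. The corpus variables are toy placeholders, and epoch selection by development-set ROC AUC is omitted for brevity, so this is an assumed pipeline rather than the authors' exact code.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

white_sup = ["..."]       # posts from white supremacist domains -> label 1
neutral = ["..."]         # topic/domain-matched neutral posts    -> label 0
antiracist = ["..."]      # anti-racist counterexamples           -> label 0
annotated = [("...", 1)]  # manually annotated (text, label) pairs, duplicated 5x

texts = white_sup + neutral + antiracist + [t for t, _ in annotated] * 5
labels = ([1] * len(white_sup) + [0] * (len(neutral) + len(antiracist))
          + [l for _, l in annotated] * 5)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels}).map(
    lambda b: tok(b["text"], truncation=True, padding="max_length"), batched=True)

args = TrainingArguments(output_dir="ws-clf", learning_rate=2e-5,
                         per_device_train_batch_size=16, num_train_epochs=5)
Trainer(model=model, args=args, train_dataset=ds).train()
```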
## 4.1 Evaluation
Evaluating weakly supervised classifiers on a heldout weakly supervised set may overestimate performance. Classifiers may learn the idiosyncrasies of domains known for white supremacy in contrast to neutral domains (4chan vs. Reddit, e.g.)
instead of learning distinctive features of white supremacy. We thus evaluate classifiers on their ability to distinguish posts manually annotated for white supremacy within the same domains, in the following 3 datasets:
Alatawi et al. **(2021)**: 1100 out of 1999 tweets
(55.0%) annotated as white supremacist. Like our work, they conceptualize white supremacy as including hate speech against marginalized groups.
Rieger et al. **(2021)**: 366 out of 5141 posts
(7.1%) from 4chan, 8chan, and r/the_Donald annotated as white supremacist. This work uses a more restricted definition of white supremacy largely distinct from hate speech. We sample examples labeled as white supremacist or neither white supremacist nor hate speech. Examples only annotated as hate speech are excluded since they may or may not fit our broader conception of white supremacism.
Siegel et al. **(2021)**: 171 out of 9743 tweets
(1.8%) annotated as white supremacist. Since they use a more restrictive definition of white supremacy, we sample posts annotated as white supremacist or neither white supremacist nor hate speech.
The proportions of white supremacist posts in these annotated evaluation datasets vary widely, so we report ROC AUC instead of precision, recall, or F1-score, which assume similar class proportions between training and test data (Ma and He, 2013).
Precision and recall curves are also available in Figure 5 in Appendix C.
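For concreteness, the snippet below shows how ROC AUC and threshold-wise precision/recall can be computed with scikit-learn; the labels and scores are toy values, not results from this work.

```python
# ROC AUC depends only on the ranking of scores, so it is robust to the very
# different class proportions across the annotated evaluation datasets.
from sklearn.metrics import roc_auc_score, precision_recall_curve

y_true = [1, 0, 0, 1, 0, 0, 0, 1]                    # 1 = white supremacist (toy labels)
y_score = [0.9, 0.2, 0.4, 0.7, 0.1, 0.3, 0.6, 0.8]   # classifier probabilities

print("ROC AUC:", roc_auc_score(y_true, y_score))

# Precision and recall at varying decision thresholds (cf. Figure 5 in Appendix C).
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
```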
Generalization evaluation To test the ability of classifiers to generalize, we perform a leave-oneout test among annotated datasets. During three runs for each model that uses manually annotated data, we train on two of the annotated datasets and test performance on the third. To test generalization to a completely unseen domain, we use a dataset of quotes from offline white supremacist propaganda, extracted from data collected by the Anti-Defamation League (ADL)6. 1655 out of 1798 quotes (92.0%) were annotated by two of the authors as exhibiting white supremacist ideology.
6https://www.adl.org/resources/tools-to-track-hate/heat-map

Baselines We evaluate our approaches against the best-performing model from Alatawi et al. (2021), BERT trained on their annotated Twitter dataset for 3 epochs with a learning rate of 2×10−5 and batch size of 16. We also compare against Siegel et al.
(2021), who first match posts with a dictionary and then filter out false positives with a Naive Bayes classifier. Though Rieger et al. (2021) also present data annotated for white supremacy, they focus on analysis and do not propose a classifier.
HateCheck evaluation for lexical bias To evaluate bias against mentions of marginalized identities, we use the synthetic HateCheck dataset (Röttger et al., 2021). We filter to marginalized racial, ethnic, gender and sexual identities, since white supremacy is a white male perspective interlinked with misogyny and homophobia (Ferber, 2004; Brindle, 2016).
We select sentences that include these identity terms in non-hateful contexts: neutral and positive uses; homonyms and reclaimed slurs; and counterspeech of quoted, referenced, and negated hate speech. This sample totals 762 sentences.
## 5 Results
Table 2 presents performance of single runs on randomly sampled 30% test sets from Alatawi et al.
(2021), Rieger et al. (2021), and Siegel et al. (2021).
Classifiers trained with both weakly annotated data and a combination of all manually annotated data average the best performance across evaluation datasets. On the Alatawi et al. (2021) dataset, their own classifier performs the best. All models have lower scores on this challenging dataset, which human annotators also struggled to agree on (0.11 Cohen's κ). In generalization performance (Table 3), we find that using weakly annotated data outperforms using only manually annotated data in almost all cases, and that combining weakly and manually annotated data enables classifiers to generalize most effectively.
## 5.1 Anti-Racist Corpus
Training with both neutral and anti-racist negative examples improves accuracy on the HateCheck dataset to 69.2 from 60.5 when using a similar number of only neutral negative examples. This supports our hypothesis that incorporating antiracist texts can mitigate bias against marginalized identity mentions. Adding anti-racist texts slightly decreases performance on the other 4 evaluation datasets, to 82.8 from 84.3 mean ROC AUC.
| Model      | A    | R    | S    | Mean |
|------------|------|------|------|------|
| S          | 60.3 | 61.8 | 61.3 | 61.2 |
| A          | 74.0 | 81.2 | 89.7 | 81.6 |
| Annotated  | 65.3 | 86.1 | 92.9 | 81.4 |
| Weak       | 71.6 | 87.8 | 90.3 | 83.2 |
| Weak + Ann | 70.9 | 90.3 | 96.8 | 86.0 |

Table 2: ROC AUC of single runs on randomly sampled 30% test sets: A = Alatawi et al. (2021), R = Rieger et al. (2021), S = Siegel et al. (2021). Model rows S and A are the Siegel et al. (2021) and Alatawi et al. (2021) baselines.
| Model      | A    | R    | S    | ADL  |
|------------|------|------|------|------|
| S          | 56.3 | 61.9 | -    | 57.2 |
| A          | -    | 81.9 | 83.9 | 89.1 |
| Annotated  | 55.2 | 82.0 | 84.7 | 68.5 |
| Weak       | 71.0 | 87.8 | 87.3 | 85.1 |
| Weak + Ann | 70.0 | 89.8 | 88.9 | 89.2 |

Table 3: Generalization performance (ROC AUC): leave-one-out evaluation across the annotated datasets (A, R, S) and the unseen ADL propaganda quotes.
## 6 Conclusion
Ideologies such as white supremacy are difficult to annotate and detect from short texts. We use weakly supervised data from domains known for white supremacist ideology to develop classifiers that outperform and generalize more effectively than prior work. Incorporating texts from an antiracist perspective mitigates lexical bias.
To apply a white supremacist language classifier to varied domains, our results show the benefit of using such weakly supervised data, especially in combination with a small amount of annotated data.
Other methods for combining these data could be explored in future work, such as approaches that use reinforcement learning to select unlabeled data for training (Ye et al., 2020; Pujari et al., 2022).
Incorporating social science insights and looking for specific tenets of white supremacist extremism could also lead to better classification. This classifier could be applied to measure the prevalence or spread of white supremacist ideology through online social networks.
## Limitations
The presented classifier and dataset are only from English-speaking sources, a major disadvantage in detecting white supremacist content globally. The dataset also is predominantly sourced from data between 2015-2019 and reflects white supremacist extremist responses to current events from that period, including the Black Lives Matter movement. This limits its effectiveness in detecting white supremacist content from other time periods.
Though including anti-racist data helps mitigate bias tested by our sample of the HateCheck dataset, an accuracy of 69.2% shows room for improvement. There is still a risk of overclassifying posts with marginalized identity mentions as white supremacist.
## Ethics Statement
There are significant ethical issues to consider in developing text classifiers for ideologies. Since this research has clear social implications, we wish to be explicit about values and author positionality beyond a sense of "objectivity" in selecting research questions (Schlesinger et al., 2017; D'Ignazio and Klein, 2020; Waseem et al., 2021). The authors come from European- and American-dominated university contexts and consider working against racism and white supremacy a priority. Most identify as white and some identify as people of color.
This research proceeded with values of racial justice and places those values at the center of assessing knowledge claims (Collins, 1990; Daniels, 2009). Our choice of focusing on white supremacy among other ideologies stems from those values.
White supremacist extremism, as well as structural white supremacism, is responsible for substantial harms against those with marginalized identities.
This research responds to a need from practitioners for more nuanced classifiers than for broad categories of hate speech or abusive language. We thus choose to pursue this research, though caution that developing classifiers for other ideologies should be done with careful consideration and a clear statement of motivating values.
There are significant risks which we consider, and attempt to mitigate, in such a dataset and classifier. First, there is the risk of misuse of a large corpus of white supremacist data, as has been seen in building and releasing a hate speech "troll bot" from 4chan data7. For this reason we build a discriminative, not generative, classifier, and only plan on releasing our dataset through a vetting process instead of publicly.
There are also privacy risks in how such a classifier could be used. Our classifier only identifies language that is likely similar to white supremacist content. The intended use of this classifier is to measure the prevalence of such an ideology on particular platforms or within networks for research purposes, not to label individuals as holding or not holding white supremacist ideologies. Using the classifier for this purpose poses significant risks of misclassification and could increase harmful surveillance tactics. We strongly discourage such a use. Our hope is that our proposed classifier and dataset can increase knowledge about the nature and extent of white supremacist extremist movement online and can inform structural interventions, such as platform policies, not interventions against individuals.
Hate speech classifiers, developed by researchers with similar equity-based values, have been found to contain biases against marginalized groups (Sap et al., 2019; Davidson et al., 2019). We measure and mitigate this bias from the start by incorporating anti-racist data, though caution that this risk still exists.
## Acknowledgements
This work was supported in part by the Collaboratory Against Hate: Research and Action Center at Carnegie Mellon University and the University of Pittsburgh. The Center for Informed Democracy and Social Cybersecurity at Carnegie Mellon University also provided support. We thank the researchers who provided source datasets, including Diana Rieger, Alexandra Siegel and others at the Center for Social Media and Politics at New York University, Jherez Taylor, Jing Qian, and Meredith Pruden. We also thank the Internet Archive and investigations teams at Bellingcat and Unicorn Riot for archiving source datasets online, and Maarten Sap for feedback.
## References
Anti-Defamation League: ADL. 2022. Deplatform Tucker Carlson and the "Great Replacement" Theory.
7https://www.vice.com/en/article/7k8zwx/ai-trained-on-4chan-becomes-hate-speech-machine
Hind S. Alatawi, Areej M. Alhothali, and Kawthar M.
Moria. 2021. Detecting White Supremacist Hate Speech Using Domain Specific Word Embedding with Deep Learning and BERT. *IEEE Access*,
9:106363–106374.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2017. Measuring and Mitigating Unintended Bias in Text Classification. In AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES).
Tawia Ansah. 2021. Violent words: strategies and legal impacts of white supremacist language. Virginia Journal of Social Policy & the Law, 28(3):305–340.
Chip Berlet and Stanislav Vysotsky. 2006. Overview of U.S. White Supremacist Groups. Journal of Political and Military Sociology, 34(1):11–48.
Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 345–363, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the Natural Language Toolkit. O'Reilly Media, Inc.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan.
2003. Latent Dirichlet Allocation. *Journal of Machine Learning Research*, 3:993–1022.
Andrew Brindle. 2016. The language of hate: A corpus linguistic analysis of white supremacist language.
Routledge.
Elisabetta Fersini, Debora Nozza, and Paolo Rosso.
2018. Overview of the Evalita 2018 Task on Automatic Misogyny Identification (AMI). In Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian
(EVALITA 2018), Turin, Italy.
Christopher Brown. 2009. WWW.HATE.COM: White supremacist discourse on the internet and the construction of whiteness ideology. Howard Journal of Communications, 20(2):189–208.
Leo Figea, Lisa Kaati, and Ryan Scrivens. 2016. Measuring online affects in a white supremacy forum. In IEEE International Conference on Intelligence and Security Informatics: Cybersecurity and Big Data, ISI 2016, pages 85–90. Institute of Electrical and Electronics Engineers Inc.
Fernando H. Calderón, Namrita Balani, Jherez Taylor, Melvyn Peignon, Yen-Hao Huang, and Yi-Shin Chen.
2021. Linguistic Patterns for Code Word Resilient Hate Speech Identification. *Sensors*, 21(23):7859.
Patricia Hill Collins. 1990. Black feminist thought:
Knowledge, consciousness, and the politics of empowerment. Routledge.
Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure.
arXiv preprint arXiv:2203.05794.
Jessie Daniels. 2009. *Cyber racism: White supremacy* online and the new attack on civil rights. Rowman &
Littlefield Publishers.
Matthias Hartung, Roman Klinger, Franziska Schmidtke, and Lars Vogel. 2017. Identifying Right-Wing Extremism in German Twitter Profiles: a Classification Approach. In Proceedings of the 22nd International Conference on Applications of Natural Language Processing to Information Systems (NLDB
2017). Springer International Publishing.
Vojtěch Hudeček, Ondřej Dušek, and Zhou Yu. 2021.
Discovering Dialogue Slots with Weak Supervision.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2430–2442, Online. Association for Computational Linguistics.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2019. Hate Speech Dataset from a White Supremacy Forum. In *Proceedings of the Second Workshop on Abusive Language Online (ALW2)*,
pages 11–20.
Catherine D'Ignazio and Lauren F. Klein. 2020. *Data* Feminism. Strong Ideas. MIT Press.
Margaret E. Duffy. 2003. Web of hate: A fantasy theme analysis of the rhetorical vision of hate groups online.
Journal of Communication Inquiry, 27(3):291–312.
Jess Fan. 2021. Discord dataset. https://www.kagg le.com/jef1056/discord-data. V5.
Prashanth Bhat and Ofra Klein. 2020. Covert Hate Speech: White Nationalists and Dog Whistle Communication on Twitter. In Gwen Bouvier and Judith E. Rosenbaum, editors, *Twitter, the Public* Sphere, and the Chaos of Online Deliberation, pages 151–172. Springer International Publishing, Cham.
Abby L. Ferber, editor. 2004. *Home-grown hate: Gender and organized racism*. Psychology Press.
Abby L. Ferber and Michael Kimmel. 2000. Reading right: the Western tradition in white supremacist discourse. *Sociological Focus*, 33(2):193–213.
Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In *Proceedings* of the Third Workshop on Abusive Language Online, pages 25–35. Association for Computational Linguistics.
Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure.
In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '21, pages 560–575, New York, NY, USA. Association for Computing Machinery.
Emilija Jokubauskaitė and Stijn Peeters. 2020. Generally Curious: Thematically Distinct Datasets of General Threads on 4chan/pol/. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 863–867.
Sabyasachi Kamila, Walid Magdy, Sourav Dutta, and MingXue Wang. 2022. AX-MABSA: A Framework for Extremely Weakly Supervised Multi-label Aspect Based Sentiment Analysis. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6136–6147, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, and Ahmed Hassan Awadallah. 2021.
Self-training with weak supervision. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 845–863, Online. Association for Computational Linguistics.
Irene Kwok and Yuzhou Wang. 2013. Locate the Hate:
Detecting Tweets against Blacks. In Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 1621–1622.
Yunqian Ma and Haibo He. 2013. *Imbalanced learning: foundations, algorithms, and applications*. John Wiley & Sons.
Marzieh Mozafari, Reza Farahbakhsh, and Noël Crespi.
2019. A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media.
In International Conference on Complex Networks and Their Applications., pages 928–940.
Antonis Papasavva, Savvas Zannettou, Emiliano De Cristofaro, Gianluca Stringhini, and Jeremy Blackburn. 2020. Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board. In Proceedings of the International AAAI
Conference on Web and Social Media, volume 14, pages 885–894.
Barbara Perry and Ryan Scrivens. 2016. White pride worldwide: Constructing global identities online. In The Globalization of Hate: Internationalizing Hate Crime? Oxford University Press.
Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2021. Resources and benchmark corpora for hate speech detection: a
systematic review. In *Language Resources and Evaluation*, volume 55, pages 477–523. Springer Science and Business Media.
Meredith L. Pruden, Ayse D. Lokmanoglu, Anne Peterscheck, and Yannick Veilleux-Lepage. 2022. Birds of a Feather: A Comparative Analysis of White Supremacist and Violent Male Supremacist Discourses. In Right-Wing Extremism in Canada and the United States, Palgrave Hate Studies, pages 215–254.
Palgrave Macmillan.
Rajkumar Pujari, Erik Oveson, Priyanka Kulkarni, and Elnaz Nouri. 2022. Reinforcement guided multitask learning framework for low-resource stereotype detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 6703–6712, Dublin, Ireland. Association for Computational Linguistics.
Jing Qian, Mai Elsherief, Elizabeth Belding, and William Yang Wang. 2018. Hierarchical CVAE for Fine-Grained Hate Speech Classification. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3550–
3559.
Elaheh Raisi and Bert Huang. 2017. Cyberbullying detection with weakly supervised machine learning. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2017, pages 409–416.
Association for Computing Machinery, Inc.
Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2017.
Snorkel: Rapid Training Data Creation with Weak Supervision. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 11, pages 269–282.
Diana Rieger, Anna Sophie Kümpel, Maximilian Wich, Toni Kiening, and Georg Groh. 2021. Assessing the Extent and Types of Hate Speech in Fringe Communities: A Case Study of Alt-Right Communities on 8chan, 4chan, and Reddit. *Social Media and Society*,
7(4).
Paul Röttger, Bertram Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet B Pierrehumbert. 2021. HATECHECK: Functional Tests for Hate Speech Detection Models. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 41–58.
Niloofar Safi Samghabadi, Parth Patwa, Srinivas Pykl, Prerana Mukherjee, Amitava Das, and Thamar Solorio. 2020. Aggression and Misogyny Detection using BERT: A Multi-Task Approach. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 11–16.
Manuela Sanguinetti, Fabio Poletto, Cristina Bosco, Viviana Patti, and Marco Stranisci. 2018. An Italian Twitter Corpus of Hate Speech against Immigrants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC'18), pages 2798–2895.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The Risk of Racial Bias in Hate Speech Detection. Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1668–1678.
Ari Schlesinger, W. Keith Edwards, and Rebecca E.
Grinter. 2017. Intersectional HCI: Engaging Identity through Gender, Race, and Class. In CHI '17:
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 5412–5427.
Alexandra A. Siegel, Evgenii Nikitin, Pablo Barberá, Joanna Sterling, Bethany Pullen, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker. 2021. Trumping Hate on Twitter? Online Hate Speech in the 2016 U.S. Election Campaign and its Aftermath. *Quarterly Journal of Political Science*, 16:71–104.
B. Simons and D. B. Skillicorn. 2020. A Bootstrapped Model to Detect Abuse and Intent in White Supremacist Corpora. In *Proceedings - 2020 IEEE*
International Conference on Intelligence and Security Informatics, ISI 2020. Institute of Electrical and Electronics Engineers Inc.
Steve Durairaj Swamy, Anupam Jamatia, and Björn Gambäck. 2019. Studying Generalisability Across Abusive Language Detection Datasets. In *Proceedings of the 23rd Conference on Computational Natural Language Learning*, pages 940–950, Hong Kong, China. Association for Computational Linguistics.
Kevin C. Thompson. 2001. Watching the Stormfront:
White Nationalists and the Building of Community in Cyberspace. *Social Analysis: The International* Journal of Anthropology, 45(1):32–52.
Zeerak Waseem, Smarika Lulz, Joachim Bingel, and Isabelle Augenstein. 2021. Disembodied Machine Learning: On the Illusion of Objectivity in NLP.
pages 1–8. ArXiv: 2101.11974.
Matthew L. Williams, Pete Burnap, and Luke Sloan.
2017. Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users' Views, Online Context and Algorithmic Estimation. *Sociology*, 51(6):1149–1168.
Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot Text Classification via Reinforced Self-training. In *Proceedings*
of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024, Online. Association for Computational Linguistics.
Michael Miller Yoder and Hana Habib. 2022. Research Needs for Countering Extremist Hate. Technical report, Collaboratory Against Hate.
Michael Miller Yoder, Lynnette Ng, David West Brown, and Kathleen Carley. 2022. How Hate Speech Varies by Target Identity: A Computational Analysis. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), pages 27–39, Abu Dhabi, United Arab Emirates (Hybrid).
Association for Computational Linguistics.
## A White Supremacist Corpus Details
We sample 9 datasets and data dumps to construct our white supremacist corpus (see Section 3.1).
Here we provide details on how each of these data sources was processed and sampled, as well as other details of the corpus.
Papasavva et al. **(2020):** 4chan /pol/ allows users to select "troll" flags to use instead of the default country flag detected from their IP address. We filter this dataset8 to posts from users that chose to post with Nazi, White Supremacist, Confederate, or Fascist troll flags. From a qualitative check, samples of posts from users with these flags often expressed white supremacist ideology. We remove posts with duplicate texts, as well as posts that are also found in the 4chan /pol/ dump from Jokubauskaitė and Peeters (2020). Our sample of this dataset contains posts from 2017 through 2019.
Stormfront data archive: Stormfront, a popular white supremacist forum, is no longer active. We sample from an Internet Archive dump of its content taken in 20179. We extract forum text from the HTML files and exclude threads that are not in English and are non-ideological. Specifically, we exclude the following threads: Nederland &
Vlaanderen, Srbija, Español y Portugués, Italia, Croatia, South Africa, en Français, Russia, Baltic
/ Scandinavia, Hungary, Opposing Views Forum, Computer Talk. Our sample of this dataset contains posts from 2001 through 2017.
8Available at https://zenodo.org/record/3606810#.Y8lkkBXMKF6, accessed 19 January 2023. This dataset is under a Creative Commons Attribution 4.0 International license.
9Available at https://archive.org/details/stormfront.org_201708, accessed 11 January 2023

Jokubauskaitė and Peeters **(2020):** We select posts in this dataset of "general" 4chan /pol/
threads10 that we find to be related to white supremacy and fascism: kraut/pol/, afd, national socialism, fascism, dixie, kraut/pol/, ethnostate, white, chimpout, feminist apocalypse, (((krautgate))). This dataset contains posts from 2001 through 2017.
Iron March data archive: Data from Iron March, a now defunct neo-Nazi and white supremacist message board, was obtained through an Internet Archive data dump11 referenced in Simons and Skillicorn (2020). This dataset contains posts from 2011 through 2017.
Qian et al. **(2018):** We rehydrate tweet IDs from this dataset, graciously provided by the authors, by the ideology of the tweet author according to the Southern Poverty Law Center. After qualitatively checking sample tweets from each ideology to see how closely they match tenets of white supremacism, we select tweets from the following ideologies: neo-Confederate, neo-Nazi, Ku Klux Klan, racist skinhead, anti-immigration, white nationalist, anti-Semitism, hate music, holocaust identity, Christian Identity. 44.9% of tweets were able to be rehydrated from the original set in September 2022. Our rehydrated tweets ran from 2009 through 2017.
Patriot Front data archive: We select Discord chat posts from servers operated by the white supremacist group, Patriot Front. These chats were leaked by Unicorn Riot12. After manual inspection for which threads are most ideological, we select the 'general' channels from 3 servers: Vanguard America-Patriot Front (2017), Front and Center
(2018), MI Goy Scouts Official (2018).
Since chat data may contain names, we remove the top 300 US first names from a 1990 list13.
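A minimal sketch of this name-removal step is shown below; the three names stand in for the full 300-name list from the 1990 data, and the `[NAME]` placeholder is an assumption made for illustration.

```python
# Remove common US first names from chat messages before further processing.
import re

top_names = {"james", "john", "robert"}   # toy subset of the real 300-name list

def redact_names(text, names=top_names):
    return " ".join(
        "[NAME]" if re.sub(r"\W", "", tok).lower() in names else tok
        for tok in text.split()
    )

print(redact_names("thanks John, see you at the meetup"))
```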
Calderón et al. **(2021):** We include articles from two white supremacist news websites, the Daily Stormer and American Renaissance, graciously provided by Calderón et al. (2021). This data contains posts from 2005 through 2017.
Pruden et al. **(2022):** We include white supremacist books and manifestos collected and provided by Pruden et al. (2022). These are: Enoch Powell's "Rivers of Blood" speech (1968), Jean Raspail's *Camp of the Saints* (1973, English translation), William Pierce's *The Turner Diaries* (1978),
David Lane's "White Genocide" manifesto (2012),
Anders Breivik manifesto (2011), Renaud Camus' The Great Replacement (2012, English translation).
These books and manifestos are split into paragraphs (split at newlines) for experiments.
ElSherief et al. **(2021):** From this dataset of implicit hate speech tweets14, we select two portions:
1) tweets labeled for "white grievance" by annotators, and 2) when rehydrated, tweets by users identified as holding selected white supremacist ideologies by Qian et al. (2018) (these papers draw on similar datasets). When we rehydrated these tweets in August 2022, we were only able to access 36.8%. Rehydrated tweets spanned from 2009 through 2017.
We lowercase and tokenize all data sources with spaCy 3.1.1 for forum posts and articles, and NLTK's TweetTokenizer (Bird et al., 2009) for tweets and chat data.
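This preprocessing could look roughly like the following; the specific spaCy model and disabled pipeline components are assumptions made for the sketch.

```python
# Lowercase and tokenize: spaCy for forum posts and articles,
# NLTK's TweetTokenizer for tweets and Discord chat.
import spacy
from nltk.tokenize import TweetTokenizer

nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
tweet_tok = TweetTokenizer(preserve_case=False)

def tokenize_document(text, domain):
    if domain in {"tweet", "chat"}:
        return tweet_tok.tokenize(text)
    return [tok.text.lower() for tok in nlp(text)]

print(tokenize_document("White supremacist content spreads online.", "article"))
print(tokenize_document("@user check this out!! #politics", "tweet"))
```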
Figure 3 shows the time spans of data from different sources in the full corpus, and Figure 4 shows the distribution of posts over time in the dataset.
These figures exclude historical data from Pruden et al. (2022) for readability.
## B Outlier Topic Removal
This appendix describes details of removing nonideological content from our white supremacist corpus. We run LDA over the full white supremacist corpus and decide on 30 topics after manually inspecting topics for coherence. We also tried BERTopic (Grootendorst, 2022), but LDA gave a less skewed distribution of documents per topic.
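The topic-modeling step could be implemented roughly as follows with gensim; the toy documents and the training hyperparameters beyond the 30-topic setting are assumptions.

```python
# Fit a 30-topic LDA model over the tokenized corpus and assign each post to its
# highest-likelihood topic (used later to filter to the 6 most ideological topics).
from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenized_posts = [
    ["jews", "jewish", "israel"],     # toy documents; in practice, the full tokenized corpus
    ["white", "people", "race"],
    ["eu", "russia", "europe"],
]

dictionary = Dictionary(tokenized_posts)
bow = [dictionary.doc2bow(doc) for doc in tokenized_posts]
lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=30, passes=5, random_state=0)

def top_topic(doc_bow):
    # Topic id with the highest probability for this document.
    return max(lda.get_document_topics(doc_bow), key=lambda t: t[1])[0]

post_topics = [top_topic(doc) for doc in bow]
```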
After a brief initial annotation period, one of the authors annotated 20 instances per topic as white supremacist (coded 1), neutral/undecided (0), or not white supremacist (-1). The criterion was the presence of at least one tenet of white supremacism, described in Section 2.

14Available at https://github.com/SALT-NLP/implicit-hate, accessed 19 January 2023
| Topic | Top words | Mean ann. |
|---------|---------------------------------------------------------------------------------|-------------|
| 13 | jews jewish jew israel kike anti holocaust kikes zionist goyim | 0.55 |
| 28 | white people whites race non black blacks racist hate want | 0.52 |
| 25 | eu russia russian europe france french european turks country sweden | 0.20 |
| 6 | national state people government power nation political socialism society right | 0.20 |
| 15 | war hitler germany german did germans nazi world army nazis | 0.17 |
| 9 | black crime gun kill blacks killed africa rape guns people | 0.15 |
Table 4: LDA topics selected for the white supremacist corpus used in experiments. These are the 6 topics with the highest mean annotation values for white supremacy. *Warning: offensive and hateful terms.*
The mean distribution of these annotations over topics is presented in Figure 2. As can be seen, most topics have mean scores less than 0, i.e., they contain more posts annotated as neutral or not white supremacist than white supremacist. This matches results from Rieger et al.
(2021), who find 24% of posts in a sample from fringe far-right platforms to be hate speech, high compared to other online spaces but certainly not the majority of posts. This motivates outlier removal, and we found that removing outlier topics provided an advantage in classification on the evaluation datasets. Assigning posts to the highest-likelihood topic, we find that filtering to posts within the 6 topics with the highest mean annotations for white supremacy provides the best performance. As seen in Figure 2, beyond 6 topics the mean drops to close to a 0 (neutral) rating. These topics related to antisemitism, anti-Black racism, and discussions of European politics and Nazism.
Top words for these 6 topics are listed in Table 4.
## C Evaluation Datasets
This appendix describes the details of sampling and processing datasets manually annotated for white supremacy used to evaluate classifiers.
We also present precision and recall curves for our best-performing Weak + Annotated model on evaluation datasets in Figure 5 for decision thresholds every 0.01 between [0, 1). Class probabilities were calculated from a softmax over the output class logits. There is particular room for improvement on precision for Rieger et al. (2021) and Siegel et al. (2021) datasets.
Alatawi et al. **(2021):** From the full annotated dataset of tweets from Alatawi et al. (2021)
15, we choose the combined annotator labels for white supremacy as the label of white supremacy or not.
Rieger et al. **(2021):** This dataset, provided by the authors, contains posts on fringe platforms
(4chan /pol/, 8chan /pol/, and r/the_Donald) annotated for many aspects of hate speech, including white supremacist ideology. We sample examples labeled for 'white supremacy/white ethnostate' or
'National Socialist' ideology as examples of white supremacy. For negative examples, we sample posts that are not labeled as white supremacist or as hate speech, since their definition of white supremacy is more restrictive. Specifically, we sample posts not labeled for 'white supremacy/white ethnostate', 'National Socialist',
'general insult', 'personal insult' or 'violence'. Direct requests for this dataset to the authors.
Siegel et al. **(2021):** We use training data from Siegel et al. (2021), provided by the authors. From lists of tweets annotated for white nationalism and hate speech, we select those marked as positive for white nationalism and as negative examples, those annotated as neither white nationalism nor hate speech. Requests for this dataset should be directed to the authors.
[Figure 5: Precision and recall curves for the Weak + Annotated model on the Alatawi et al. (2021), Rieger et al. (2021), and Siegel et al. (2021) test sets.]
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Unnumbered "Limitations" section at the end.
✓ A2. Did you discuss any potential risks of your work?
Unnumbered "Ethics Statement" section at the end.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1, Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3 "Weakly Annotated Data" Section 4 "Classification" (a model)
✓ B1. Did you cite the creators of artifacts you used?
Sections 3.1 and 4.1, more details in Appendices A and C
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 3.1 Appendix A
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics Statement
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix A (the Patriot Front Discord chat dataset)
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3.1 Table 1 B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-etal-2023-bolt | {BOLT}: Fast Energy-based Controlled Text Generation with Tunable Biases | https://aclanthology.org/2023.acl-short.18 | Energy-based models (EBMs) have gained popularity for controlled text generation due to their high applicability to a wide range of constraints. However, sampling from EBMs is non-trivial, as it often requires a large number of iterations to converge to plausible text, which slows down the decoding process and makes it less practical for real-world applications. In this work, we propose BOLT, which relies on tunable biases to directly adjust the language model{'}s output logits. Unlike prior work, BOLT maintains the generator{'}s autoregressive nature to assert a strong control on token-wise conditional dependencies and overall fluency, and thus converges faster. When compared with state-of-the-arts on controlled generation tasks using both soft constraints (e.g., sentiment control) and hard constraints (e.g., keyword-guided topic control), BOLT demonstrates significantly improved efficiency and fluency. On sentiment control, BOLT is 7x faster than competitive baselines, and more fluent in 74.4{\%} of the evaluation samples according to human judges. | # Bolt: Fast Energy-Based Controlled Text Generation With Tunable Biases
Xin Liu, Muhammad Khalifa, and **Lu Wang**
Computer Science and Engineering University of Michigan Ann Arbor, MI
{liuxincs, khalifam, wangluxy}@umich.edu
## Abstract
Energy-based models (EBMs) have gained popularity for controlled text generation due to their high applicability to a wide range of constraints. However, sampling from EBMs is non-trivial, as it often requires a large number of iterations to converge to plausible text, which slows down the decoding process and makes it less practical for real-world applications. In this work, we propose BOLT, which relies on tunable biases to directly adjust the language model's output logits. Unlike prior work, BOLT maintains the generator's autoregressive nature to assert a strong control on token-wise conditional dependencies and overall fluency, and thus converges faster.
When compared with state-of-the-arts on controlled generation tasks using both soft constraints (e.g., sentiment control) and hard constraints (e.g., keyword-guided topic control),
BOLT demonstrates significantly improved efficiency and fluency. On sentiment control, BOLT is 7x faster than competitive baselines, and more fluent in 74.4% of the evaluation samples according to human judges.
## 1 Introduction
Generating text using pre-trained language models (PLMs) to satisfy user-specified constraints is an important task to allow practical usage of PLMs. Common controlled text generation methods include training conditional language models (Keskar et al., 2019; Zhang et al., 2020) or attribute-based fine-tuning of PLMs (Liu et al.,
2020; Zhang and Song, 2022). Yet, these methods are often resource-intensive and infeasible for large models like GPT-3 (Brown et al., 2020). Furthermore, these methods assume access to large amounts of attribute-specific data and are inflexible for new constraints. On the contrary, inference-time methods (Qin et al., 2022; Kumar et al., 2022; Mireshghallah et al., 2022) directly steer the generations without model re-training or
[Figure 1: Comparison of decoding speed between BOLT and recent EBM-based methods on sentiment control (see Section 1).]
fine-tuning. In particular, *energy-based models* (EBMs) (LeCun et al., 2006) have demonstrated greater flexibility, since they can accommodate arbitrary energy functions (Khalifa et al., 2021; Qin et al., 2022; Kumar et al., 2022).
Despite their benefits, sampling from EBMs presents profound challenges. Notably, the sampling process, which is often done through Langevin Dynamics (Welling and Teh, 2011) or Gibbs Sampling (Goyal et al., 2022), requires a substantial number of iterations to converge to readable sequences of text. This can significantly slow down the decoding process, rendering the methods unusable in real-world applications.
In this paper, we propose **BOLT**1, that uses a sequence of tunable Biases Over LogiTs of the PLM's output layer, to steer the generation towards specified constraints. The biases are tuned through a gradient-based process, with the goal of minimizing the energy of the generated sequences. In contrast to prior research which mainly investigates non-autoregressive decoders, BOLT maintains the autoregressive generation process, thus resulting in both *fast convergence* with fewer iterations, since conditional dependencies between tokens are exploited, and *improved* fluency. Fig. 1 demonstrates that the sampling process of recent EBM-based methods—MuCola (Kumar et al., 2022), Mix&Match (Mireshghallah et al., 2022), and COLD (Qin et al., 2022)—is slower on a sentiment control task, e.g., generating 20 tokens using 10 seconds on average, while BOLT only takes 1.4 seconds.

1Our code is available at https://github.com/launchnlp/BOLT.
We conduct controlled generation experiments over three tasks: sentiment control, toxicity avoidance, and keyword-guided topic control, encompassing both soft and hard constraint-based generation problems. BOLT's outputs achieve the lowest perplexity across all tasks, while being 7x and 17x faster than COLD and MuCola, respectively, on sentiment control. Additionally, BOLT shows superior controllability in toxicity avoidance while obtaining comparable controllability on the other two tasks. Lastly, according to human evaluation, 74.4% and 51.0% of samples produced by BOLT
in sentiment control and toxicity avoidance are rated as more fluent than those by multiple comparison methods.
## 2 Related Work
Popular methods for controlled generation often rely on attribute-conditioned language modeling (Krause et al., 2021), model fine-tuning (Khalifa et al., 2021), or prompt tuning (Yang et al.,
2022), all requiring intensive model training and attribute-specific data. This paper instead focuses on inference-time methods that require no model training. Prior work under this paradigm mainly adjusts the output token probabilities toward constraint-satisfying sequences (Dathathri et al., 2020; Yang and Klein, 2021). For instance, Dathathri et al. (2020) leverage gradients from an attribute classifier to update the LM hidden state to guide the generation. However, one notable drawback of such techniques is the requirement of learning specialized models such as attribute classifiers (Dathathri et al., 2020) and future-aware classifiers (Yang and Klein, 2021). Another family of methods searches for optimal sequences through optimization in the continuous space. For instance, MuCoCo (Kumar et al., 2021) uses constrained continuous optimization, solved by Lagrangian multipliers and gradient descent. Qin et al. (2022) further enhance the gradient-based
[Figure 2: Overview of the BOLT framework.]
optimization method by using Langevin Dynamics. Their main issue is that they require numerous sampling iterations to converge since raw logits or embeddings are optimized without considering conditional dependencies among tokens. BOLT,
on the contrary, maintains the token dependencies through autoregressive decoding while optimizing for the constraints through the added biases.
## 3 The BOLT Model
Energy-based controlled generation aims to produce a sequence of tokens that minimize an energy function, with lower energy indicating more constraints being satisfied (Qin et al., 2022; Kumar et al., 2022). While sampling techniques such as rejection sampling can be used to sample lowenergy sequences (Mireshghallah et al., 2022), such sampling requires the usage of an appropriate proposal distribution and is typically slow in practice. Instead, we propose to tune a set of biases **at inference time** with the goal of steering the decoding process towards generating low-energy sequences.
The overview of our framework is displayed in Fig. 2. At each decoding step $t$, we add the tunable bias $\mathbf{y}_t^{b} \in \mathbb{R}^{V}$ to the PLM predicted logits $\mathbf{y}_t^{LM} \in \mathbb{R}^{V}$ as follows:

$$\mathbf{y}_{t}=\mathbf{y}_{t}^{LM}+w_{t}\cdot\mathbf{y}_{t}^{b},\qquad\qquad(1)$$
where $w_t$ controls the contribution of the bias. As a result of the autoregressive decoding, the control effect at later time steps is compounded from previous steps. One way to mitigate that is to have smaller weights for biases at later time steps. Therefore, we model the weights using a decreasing linear function of $t$, i.e., $w_t = 1 - \frac{t}{L}$, which is found to work best in practice.2 Typically, we sample a discrete token $y_t$ from the word distribution $\mathrm{softmax}(\mathbf{y}_t)$, and then feed it back to the PLM for further decoding. However, this would require backpropagation through the sampling process to optimize the biases. As a workaround, we use the straight-through gradient estimator (STE) (Bengio et al., 2013), which converts $\mathbf{y}_t$ to a one-hot vector $\bar{\mathbf{y}}_t$ in the forward pass and bypasses $\bar{\mathbf{y}}_t$ in the backward pass to allow gradients to be applied to $\mathbf{y}_t$.3 $\bar{\mathbf{y}}_t$ designates the argmax token, i.e., the position with the highest logit value in $\mathbf{y}_t$ is set as 1, and 0 for the rest. The one-hot vector $\bar{\mathbf{y}}_t$ is fed to the PLM for next-step decoding.
After decoding for $L$ steps, we obtain a sequence of one-hot vectors $\bar{\mathbf{y}}_{[1:L]} = [\bar{\mathbf{y}}_1, \bar{\mathbf{y}}_2, \ldots, \bar{\mathbf{y}}_{L-1}, \bar{\mathbf{y}}_L]$. Then, we update $\mathbf{y}_t^{b}$ with gradient descent to minimize the energy function $E(\bar{\mathbf{y}}_{[1:L]})$.4 Thus, BOLT tunes the biases with the goal of steering the PLM to generate sequences with low energies. Finally, the output sentence $[y_1, y_2, \ldots, y_{L-1}, y_L]$ can be derived from $\bar{\mathbf{y}}_{[1:L]}$ through multiple iterations of gradient descent until the constraints are satisfied (e.g., the toxicity probability of generated sequence is lower than a threshold) or a predefined maximum iteration number is reached.
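To make the decoding-with-biases procedure concrete, a minimal sketch is given below. It is written against the Hugging Face GPT-2 interface and only illustrates the mechanism described above (Eq. 1, the decreasing weights, and the straight-through estimator); it is not the authors' released implementation. The choice of GPT-2 small, the Adam optimizer, the softmax-based straight-through relaxation, and the toy keyword "energy" are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
for p in lm.parameters():
    p.requires_grad_(False)                     # the PLM is frozen; only the biases are tuned

L, V = 12, lm.config.vocab_size
biases = torch.zeros(L, V, requires_grad=True)  # y_t^b for t = 1..L
optimizer = torch.optim.Adam([biases], lr=0.5)

prompt_ids = tokenizer("The chicken", return_tensors="pt").input_ids
embed = lm.get_input_embeddings()
target_id = tokenizer.encode(" great")[0]       # toy constraint: encourage the word "great"

def decode(biases):
    """Autoregressive decoding; returns straight-through one-hot vectors for all L steps."""
    inputs_embeds = embed(prompt_ids)
    one_hots = []
    for t in range(L):
        logits = lm(inputs_embeds=inputs_embeds).logits[:, -1, :]   # y_t^LM
        y_t = logits + (1.0 - t / L) * biases[t]                    # Eq. (1) with w_t = 1 - t/L
        probs = F.softmax(y_t, dim=-1)
        hard = F.one_hot(probs.argmax(dim=-1), V).float()
        y_bar = hard + probs - probs.detach()                       # straight-through estimator
        one_hots.append(y_bar)
        # Forward pass sees the argmax token's embedding; gradients flow through probs.
        inputs_embeds = torch.cat([inputs_embeds, (y_bar @ embed.weight).unsqueeze(1)], dim=1)
    return one_hots

for _ in range(8):                               # a few gradient steps on the biases
    optimizer.zero_grad()
    one_hots = decode(biases)
    energy = -sum(oh[0, target_id] for oh in one_hots)   # stand-in energy: keyword presence
    energy.backward()
    optimizer.step()

ids = [oh.argmax(dim=-1).item() for oh in decode(biases)]
print(tokenizer.decode(prompt_ids[0].tolist() + ids))
```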
Energy Functions. Following previous work, we experiment with both soft constraints, applied on sentiments and non-toxicity, and hard constraint, for requiring the existence of certain keywords in the generations. We describe the corresponding energy functions below. Additionally, we use a fluency-encouraging component to maintain the coherence of the generated text.
Soft Constraints. We use attribute classifiers as discriminators for soft constraints. The energy output by the discriminator is defined as $E_{soft} = -p_{dis}(c \mid \bar{\mathbf{y}}_{[1:L]})$, $c \in \mathcal{C}$. Here $p_{dis}(c \mid \bar{\mathbf{y}}_{[1:L]})$ is the probability that the sequence $\bar{\mathbf{y}}_{[1:L]}$ has the attribute $c$ according to the attribute classifier, and $\mathcal{C}$ is the set of attributes, e.g., positive and negative.
Hard Constraints. We follow Qin et al. (2022) and Kumar et al. (2022) and use the differentiable BLEU (Liu et al., 2022), which measures unigram similarity of the generated sentence and target keywords. This energy can be represented as $E_{hard} = -\text{diff-BLEU}(\bar{\mathbf{y}}_{[1:L]}, [w_1, \ldots, w_K])$, where $w_k$ is a keyword expected to appear in the generation.
Fluency Constraints. We define a fluency-encouraging energy function corresponding to the negative probability of the generated sequence according to an external PLM, specifically GPT2-large, given by $E_{fluent} = -\sum_{t=1}^{L} p(y_t \mid \bar{\mathbf{y}}_{<t})$, where $y_t$ is the $t$-th token and $\bar{\mathbf{y}}_{<t}$ is the sequence generated until step $t$.

In order to ensure the fluency of samples, we incorporate the fluency energy function with both soft and hard constraints: the total energy function $E_{soft} + \lambda_1 E_{fluent}$ is used for soft constraints, and $E_{hard} + \lambda_2 E_{fluent}$ for hard constraints, where $\lambda_1$ and $\lambda_2$ are hyperparameters.5
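A sketch of how the three energy terms could be composed is shown below; `attr_prob`, `diff_bleu`, and `lm_token_probs` are placeholder callables standing in for the attribute classifier, the differentiable BLEU of Liu et al. (2022), and the external GPT2-large scorer, and the toy usage at the end only illustrates the interface.

```python
import torch

def soft_energy(y_soft, attr_prob):
    # E_soft = -p_dis(c | y): negative discriminator probability of the desired attribute.
    return -attr_prob(y_soft)

def hard_energy(y_soft, keyword_ids, diff_bleu):
    # E_hard = -diff-BLEU(y, keywords): unigram overlap with the target keywords.
    return -diff_bleu(y_soft, keyword_ids)

def fluency_energy(y_soft, lm_token_probs):
    # E_fluent = -sum_t p(y_t | y_<t) under an external PLM.
    return -lm_token_probs(y_soft).sum()

def total_energy(y_soft, constraint_energy, lm_token_probs, lam=0.5):
    # E_soft + lambda1 * E_fluent (or E_hard + lambda2 * E_fluent for hard constraints).
    return constraint_energy(y_soft) + lam * fluency_energy(y_soft, lm_token_probs)

# Toy usage with stand-in callables:
y_soft = torch.rand(20, 100)                    # (sequence length, vocab) relaxed tokens
toy_attr = lambda y: y[:, 0].mean()             # pretend "probability of positive"
toy_lm = lambda y: y.max(dim=-1).values         # pretend per-token LM probabilities
print(total_energy(y_soft, lambda y: soft_energy(y, toy_attr), toy_lm, lam=0.5))
```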
## 4 Experiments And Results

## 4.1 Constraints And Energy Functions
Following Kumar et al. (2022), we conduct experiments on two **soft constraint** tasks: 1) *sentiment* control and 2) *toxicity avoidance*. For sentiment control, we collect 15 prompts from Dathathri et al. (2020). For each prompt, every model generates 20 sentences of 3 different lengths (12, 20, and 50 tokens) per sentiment (positive and negative). This results in a total of 1800 generations.
Moreover, we extract 1,000 prompts from RealToxicityPrompts (Gehman et al., 2020) to assess toxicity avoidance, with each model generating 25 sentences per prompt.
For the **hard constraint** task, we use keyword-guided topic control as done by Dathathri et al.
(2020). We use the same set of 15 prompts, with each model generating sentences of 20 tokens, for 7 topics. For each combination of topic and prompt, 20 sentences are generated. We extract 4 keywords as constraints per topic. Full lists of keywords and prompts are in Appendix D. In addition, we perform experiments on CommonGen test set (Lin et al., 2020), which comprises 1,498 sets of keywords. For each set of keywords, each model aims to generate a single sentence that incorporates all of the given keywords.
For formulating the **energy functions**, we construct the discriminators in $E_{soft}$ for sentiment
5Appendix C.2 describes how to search λ1 and λ2.
| Model     | Int. Clsf.↑ | Ext. Clsf.↑ | PPL↓  | Dist-3↑ | REP-3gram↓ | Speed↑ | Flu.↑ | Con.↑ |
|-----------|-------------|-------------|-------|---------|------------|--------|-------|-------|
| COLD      | 61.46       | 55.10       | 9.09  | 0.30    | 0.013      | 2.04   | -     | -     |
| MuCola    | 93.22       | 86.55       | 11.36 | 0.55    | 0.057      | 0.80   | 10.0  | 65.0  |
| Mix&Match | 96.09       | 84.98       | 66.75 | 0.82    | 0.006      | 1.62   | 15.6  | 33.9  |
| BOLT      | 95.78       | 80.12       | 8.12  | 0.65    | 0.002      | 13.79  | 74.4  | 56.7  |

Table 1: Results on sentiment control. Flu. and Con. are human evaluations of fluency and controllability.
control and toxicity avoidance by training 1) a sentiment classifier on Yelp polarity corpus (Zhang et al., 2015), and 2) a toxicity detection classifier on Jigsaws (Jain et al., 2022), following the settings in Mireshghallah et al. (2022). During generation, the desired attribute c is set as either positive or negative in sentiment control, and as non-toxic in toxicity avoidance. For keyword-guided topic control, we use the set of 4 extracted keywords from each topic to compute $E_{hard}$.
## 4.2 Baselines
We compare with three energy-based methods: 1)
COLD (Qin et al., 2022), which performs sampling by iteratively updating a sequence of token-level logits using Langevin dynamics; 2) **MuCola**
(Kumar et al., 2022) is similar to COLD, but samples the sequence of token embeddings instead of logits; 3) **Mix&Match** (Mireshghallah et al.,
2022) uses Gibbs sampling to draw a batch of sentences and determine their acceptance or rejection using the energy function, repeated until convergence.6Implementation details of baselines can be found in Appendix C.4.
## 4.3 Results And Analysis
As shown in Table 1, on **sentiment control**, we observe that BOLT is 7x faster than comparisons while achieving comparable controllability.
Though MuCola has the best control, as measured by the external classifier and human judgment, it generates repetitive trigrams more frequently.
Moreover, as rated by human judges, 74.4% of the BOLT generations are preferred over other models' outputs in terms of fluency.

6Mix&Match's code only supports sentiment control. Therefore, we only compare with their results on the sentiment control task.
| Model  | Toxicity↓ | Toxicity Prob.↓ | PPL↓  | Flu.↑ | Tox.↓ |
|--------|-----------|-----------------|-------|-------|-------|
| GPT2-L | 0.404     | 0.307           | 6.64  | -     | -     |
| COLD   | 0.340     | 0.239           | 17.12 | 31.5  | 16.9  |
| MuCola | 0.309     | 0.142           | 35.09 | 17.5  | 16.0  |
| BOLT   | 0.265     | 0.111           | 12.77 | 51.0  | 15.5  |

Table 2: Results on toxicity avoidance. Flu. and Tox. are human evaluations of fluency and toxicity.
| Model | Succ.(%)↑ | PPL↓ | REP-3gram↓ |
|---------|-------------|--------|--------------|
| COLD | 100 | 15.93 | 0.001 |
| MuCola | 100 | 124.99 | 0.002 |
| BOLT | 100 | 8.94 | 0.003 |
This is further demonstrated by the sample outputs in Tables 4 and 7, where BOLT's generated text is more coherent and sensible.
Similar observations are made on the **toxicity avoidance** task. As seen in Table 2, BOLT
achieves the lowest toxicity levels. More than half of BOLT's generations are judged to be more fluent than the baselines, which we argue is due to the fact that BOLT maintains the LM's autoregressive decoding nature.
Results on **keyword-guided topic control** (Table 3) further echo the advantage of BOLT. While ensuring the inclusion of specified keywords in the generated sentence is relatively easy for all models, BOLT stands out with its more fluent and natural generations.
| Model | Samples |
|-------|---------|
| **Sentiment Control (positive)** | |
| COLD | The chicken is cooked in a pan of water and spices, then covered |
| MuCola | The chicken, is a perfect, and very, very, very, very |
| Mix&Match | The chicken was exceptional. also on display were judge chambers, master of |
| BOLT | The chicken skin is a wonderful source of protein, and it's great |
| **Toxicity Avoidance** | |
| COLD | In the midst of this class discussion, a student asked why so many straight men are attracted by gay men, to the point where they are willing and able to sleep with men who |
| MuCola | In the midst of this class discussion, a student asked why so many straight men would date gay men. "Because it's easier!" Ho-Chunk interjected. |
| BOLT | In the midst of this class discussion, a student asked why so many straight men had difficulty with women. I responded that it was because they were not used to being in relationships with |
| **Keyword-guided Topic Control** | |
| COLD | The last time I server keyboard, server, and client, the only time the keyboard is on the keyboard, keyboard |
| MuCola | The last time I heard from him was when he Linux fight between some UFC fighters and the tournament in Linux. I |
| BOLT | The last time Linux server was in the news, it was when Microsoft announced that Windows Server 2012 would be released with |

Table 4: Sample generations from each model on the three controlled generation tasks.

We further evaluate BOLT on another hard constraint control task based on the CommonGen dataset. This task is more challenging, since it requires the generation to include an average of 4.5 provided keywords. We compare the performance of BOLT with that of COLD and MuCola. Based on the results presented in Table 5, BOLT achieves comparable coverage and generates fewer repetitions, with an increased perplexity. The worse fluency can be attributed to the tradeoff made by BOLT between controllability and fluency. Our experiments show that ensuring the inclusion of all specified keywords often requires a larger number of iterations for BOLT to converge, compared to other tasks discussed earlier in the paper. Unfortunately, this increased optimization process causes disruption of the original autoregressive decoding outputs, resulting in less fluent generations. This suggests future research directions that explore different types of hard constraint energy functions (Zhukov and Kretov, 2017; Casas et al., 2018) and optimization methods (Rennie et al., 2017; Liu et al., 2017) to handle hard constraints with multiple keywords, aiming for faster convergence and higher-quality sentence generation.

| Model  | Coverage(%)↑ | PPL↓  | REP-3gram↓ |
|--------|--------------|-------|------------|
| COLD   | 94.7         | 18.55 | 0.214      |
| MuCola | 99.8         | 25.94 | 0.022      |
| BOLT   | 99.2         | 34.63 | 0.000      |

Table 5: Results on CommonGen. Coverage: % of keywords covered in model generations.

Overall, BOLT *demonstrates a faster decoding speed and generates text with superior fluency*, while maintaining comparable or better controllability than the baselines. This makes BOLT particularly suitable for practical use cases. In future work, we plan to apply BOLT to other controlled generation tasks and explore its potential usage for data augmentation (Malandrakis et al., 2019; Kumar et al., 2020).

## 5 Conclusion

We introduce BOLT, an energy-based model for controlled text generation. It uses a sequence of tunable biases applied to the logits of the PLM's output layer to guide the generation towards specified constraints or attributes. Through experimental evaluations on controlled text generation tasks involving both soft and hard constraints, we demonstrate the effectiveness of BOLT in terms of both speed and fluency.
We further evaluate BOLT on another hard constrain control task based on the CommonGen dataset. This task is more challenging, since it requires the generation to include an average of 4.5 provided keywords. We compare the performance of BOLT with that of COLD and MuCola. Based on the results presented in Table 5, BOLT achieves comparable coverage and generates fewer repetitions, with an increased perplexity. The worse fluency can be attributed to the tradeoff made by BOLT between controllability We introduce BOLT, an energy-based model for controlled text generation. It uses a sequence of tunable biases applied to the logits of the PLM's output layer to guide the generation towards specified constraints or attributes. Through experimental evaluations on controlled text generation tasks involving both soft and hard constraints, we demonstrate the effectiveness of BOLT in terms of both speed and fluency.
## Limitations
While BOLT shows an impressive performance in imposing soft constraints and some hard constraints, it still falls short when it comes to imposing harder constraints, e.g., keyword control with more than three keywords. BOLT also requires careful tuning of the different hyperparameters that make up the energy function - an issue that is prevalent among energy-based controlled generation methods.
## Ethical Statements
It should be noted that certain model generations, as listed in Table 4 and Table 7, may contain elements of toxicity and offensiveness. Besides, despite BOLT's ability to mitigate the risk of generating toxic content through toxicity avoidance techniques, it remains possible for it to produce biased, offensive, and fake information that could potentially cause harm to the general public.
An additional ethical concern is the possibility of malicious use of the controlled generation models to generate harmful content. Our experiments reveal that this could be accomplished by deliberately optimizing the tunable biases such that, e.g., the energy function corresponding to the toxicity level is maximized.
## Acknowledgements
This work is supported in part by National Science Foundation through grant IIS-2046016 and LG AI Research. Additionally, we would like to thank Kumar for his assistance in reproducing the results of MuCola. We also thank the anonymous reviewers for their valuable suggestions.
## References
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS*
2020, December 6-12, 2020, virtual.
Noe Casas, José A. R. Fonollosa, and Marta R. Costajussà. 2018. A differentiable BLEU loss. analysis and first results. In *6th International Conference* on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings. OpenReview.net.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. Realtoxici-
typrompts: Evaluating neural toxic degeneration in language models. *ArXiv*, abs/2009.11462.
Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick. 2022. Exposing the implicit energy networks behind masked language models via Metropolis–Hastings. In *The Tenth International* Conference on Learning Representations, ICLR
2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Naman Jain, Skanda Vaidyanath, Arun Shankar Iyer, Nagarajan Natarajan, Suresh Parthasarathy, Sriram K. Rajamani, and Rahul Sharma. 2022. Jigsaw:
Large language models meet program synthesis. In 44th IEEE/ACM 44th International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, pages 1219–1231. ACM.
Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A
conditional transformer language model for controllable generation. *arXiv preprint arXiv:1909.05858*.
Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2021. A distributional approach to controlled text generation. In *International Conference on* Learning Representations.
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq R. Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. Gedi:
Generative discriminator guided sequence generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event /*
Punta Cana, Dominican Republic, 16-20 November, 2021, pages 4929–4952. Association for Computational Linguistics.
Sachin Kumar, Eric Malmi, Aliaksei Severyn, and Yulia Tsvetkov. 2021. Controlled text generation as continuous optimization with multiple constraints.
In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 14542–14554.
Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov.
2022. Gradient-based constrained sampling from language models. *arXiv preprint arXiv:2205.12558*.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China. Association for Computational Linguistics.
Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and Fujie Huang. 2006. A tutorial on energy-based learning. *Predicting structured data*, 1(0).
Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang
Ren. 2020. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1823–1840, Online. Association for Computational Linguistics.
Guangyi Liu, Zichao Yang, Tianhua Tao, Xiaodan Liang, Junwei Bao, Zhen Li, Xiaodong He, Shuguang Cui, and Zhiting Hu. 2022. Don't take it literally: An edit-invariant sequence loss for text generation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2055–2078. Association for Computational Linguistics.
Ruibo Liu, Guangxuan Xu, Chenyan Jia, Weicheng Ma, Lili Wang, and Soroush Vosoughi. 2020.
Data boost: Text data augmentation through reinforcement learning guided conditional generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 9031–9041, Online. Association for Computational Linguistics.
Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image captioning via policy gradient optimization of spider. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 873–881. IEEE Computer Society.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Nikolaos Malandrakis, Minmin Shen, Anuj Kumar Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. In Proceedings of the 3rd Workshop on Neural Generation and Translation@EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019, pages 90–98. Association for Computational Linguistics.
Fatemehsadat Mireshghallah, Kartik Goyal, and Taylor Berg-Kirkpatrick. 2022. Mix and match: Learning-free controllable text generation using energy language models. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 401–415, Dublin, Ireland. Association for Computational Linguistics.
Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. Cold decoding: Energy-based constrained text generation with langevin dynamics.
arXiv preprint arXiv:2202.11705.
Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In *2017*
IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1179–1195. IEEE Computer Society.
Max Welling and Yee Whye Teh. 2011. Bayesian learning via stochastic gradient langevin dynamics. In *Proceedings of the 28th International Conference* on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 681–688.
Omnipress.
Kevin Yang and Dan Klein. 2021. FUDGE: controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3511–3535. Association for Computational Linguistics.
Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie.
2022. Tailor: A prompt-based approach to attributebased controlled text generation. arXiv preprint arXiv:2204.13362.
Hanqing Zhang and Dawei Song. 2022. Discup: Discriminator cooperative unlikelihood prompt-tuning for controllable text generation. arXiv preprint arXiv:2210.09551.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun.
2015. Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems 28: Annual Conference on* Neural Information Processing Systems (NeurIPS)
2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657.
Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, and Bill Dolan. 2020.
POINTER: Constrained progressive text generation via insertion-based generative pre-training. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*,
pages 8649–8670, Online. Association for Computational Linguistics.
Vlad Zhukov and Maksim Kretov. 2017. Differentiable lower bound for expected BLEU score. *CoRR*,
abs/1712.04708.
## A Exploring Different Settings Of W
| Function | $w_t = \frac{t}{L}$ | $w_t = 1-\frac{t}{L}$ | $w_t = 1$ | $w_t = \mathbf{w}[t]$ |
|------------|----------|----------|----------|----------|
| Ext. Clsf. | 72.00 | 79.67 | 78.67 | 79.33 |
| PPL | 4.80 | 7.43 | 8.88 | 9.30 |
| REP-3gram | 0.000 | 0.002 | 0.002 | 0.002 |
Table 6: Effect of different settings of w on sentiment control. The best results are **bolded**, the second best are underlined.
We try the following functions to model the weights in Eq. 1:
- $w_{t}=\frac{t}{L}$
- $w_{t}=1-\frac{t}{L}$
- $w_{t}=1$
- $w_{t}=\mathbf{w}[t]$

where $\mathbf{w} \in \mathbb{R}^{L}$ is a tunable vector that is tuned during optimization. We apply these functions and run BOLT on sentiment control with $L$ set to 50. According to the results in Tab. 6, the linear function $w_t = 1-\frac{t}{L}$ that decreases over time was found to achieve an optimal balance between controllability and generation quality. Therefore, it was utilized in all subsequent experiments.
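For reference, the four candidate schedules can be written compactly as below. This is a minimal sketch with function and schedule names of our own choosing; it is not taken from the released code.

```python
import torch

def weight_schedule(name: str, L: int) -> torch.Tensor:
    """Return the per-position weights w_1..w_L for a sequence of length L."""
    t = torch.arange(1, L + 1, dtype=torch.float)
    if name == "increasing":      # w_t = t / L
        return t / L
    if name == "decreasing":      # w_t = 1 - t / L (the setting adopted in all later experiments)
        return 1.0 - t / L
    if name == "constant":        # w_t = 1
        return torch.ones(L)
    if name == "learned":         # w_t = w[t], a tunable vector optimized jointly with the biases
        return torch.nn.Parameter(torch.ones(L))
    raise ValueError(f"unknown schedule: {name}")

w = weight_schedule("decreasing", L=50)
```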
## B Implementation Of STE
Using the PyTorch API, we can easily convert yt to the one-hot vector ȳt by running

ȳt = torch.nn.functional.one_hot(torch.argmax(yt)) + yt - yt.detach().
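The snippet below is a small, self-contained illustration of this straight-through estimator (STE) trick; the vocabulary size and variable names are ours. The forward value of ȳt is exactly the one-hot vector, while gradients still flow into yt through the undetached term.

```python
import torch
import torch.nn.functional as F

vocab_size = 8
y_t = torch.randn(vocab_size, requires_grad=True)   # soft logits for one position

# Forward pass sees a hard one-hot vector; backward pass treats it as identity w.r.t. y_t,
# because the detached copy cancels y_t in the value but not in the gradient.
y_bar_t = F.one_hot(torch.argmax(y_t), num_classes=vocab_size).float() + y_t - y_t.detach()

loss = (y_bar_t ** 2).sum()
loss.backward()
print(y_bar_t)    # numerically a one-hot vector
print(y_t.grad)   # non-zero gradients flow back into y_t
```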
## C Implementation Details

## C.1 Reparameterization Of The Tunable Biases
In our experiments, we apply reparameterization to the tunable biases, representing the offset $y^b$ as lm_head($h^b$), where lm_head(·) is the output layer in the PLM. Tuning $h^b$ instead of $y^b$ helps to reduce memory usage, as the dimension of $h^b$ is significantly smaller than that of $y^b$ (1280 vs. 50257). Note that the parameters of lm_head(·) are fixed while tuning $h^b$.
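A minimal sketch of this reparameterization with the Hugging Face GPT-2 implementation is given below; the sequence length and the zero initialization are placeholders (the actual initialization is described in C.2), and the energy computation is omitted.

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-large").eval()
for p in model.parameters():               # the PLM, including lm_head, stays frozen
    p.requires_grad_(False)

seq_len, hidden = 20, model.config.n_embd  # n_embd = 1280 for GPT2-large
h_b = torch.nn.Parameter(torch.zeros(seq_len, hidden))   # low-dimensional tunable biases

# The logit offsets are never stored directly; they are re-derived from the frozen output
# layer, so only (seq_len x 1280) parameters are tuned instead of (seq_len x 50257).
y_b = model.lm_head(h_b)                   # shape: (seq_len, vocab_size)
```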
## C.2 Hyperparameters
In order to search for the optimal values of λ1 and λ2 in the soft and hard constraint tasks, we employ a grid search strategy with an interval of 0.1, varying λ1 and λ2 from 0 to 1. Ultimately, we set both λ1 and λ2 to 0.1 for a balance between controllability and fluency. We initialize $h^b$ with a normal distribution N(0, 0.25), which ensures that the biases are initially set to nearly zero in order to avoid making excessive adjustments to the logits of the PLM. We use Adam as the optimizer while tuning the biases, with a learning rate of 0.025. To reduce the amount of repetition, we set a repetition penalty (Keskar et al., 2019) of 1.2 to adjust the PLM's predicted logits. We employ the MaxLengthCriteria in Huggingface to control the length of generated sequences, following previous studies. For sentiment control, we set the maximum number of iterations to 8. Once the maximum number of iterations is reached, the sequence with the lowest energy among the iterations is picked as the output. For toxicity control, we also set the maximum number of iterations to 8, and adopt early stopping if the toxicity probability of the generated sequence given by the discriminator is lower than 0.01. During keyword-guided topic control, we stop the optimization early once at least one keyword appears in the generated sequence. In the case of CommonGen, optimization is terminated when all the keywords appear in the generated sentence or the maximum number of iterations (100) is reached, while keeping the remaining hyperparameters unchanged.
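For convenience, the settings above can be collected in code form as follows. The dictionary keys are our own naming, and whether 0.25 in N(0, 0.25) denotes the variance or the standard deviation is not stated; the sketch treats it as the variance, which is an assumption.

```python
import torch

config = {
    "lambda_1": 0.1,              # energy-term weights, both 0.1 after a grid search over [0, 1], step 0.1
    "lambda_2": 0.1,
    "optimizer": "Adam",
    "learning_rate": 0.025,
    "repetition_penalty": 1.2,    # applied to the PLM logits (Keskar et al., 2019)
    "max_iterations": {
        "sentiment": 8,
        "toxicity": 8,            # early stop when discriminator toxicity prob. < 0.01
        "keyword_topic": None,    # early stop once any keyword appears
        "commongen": 100,         # or earlier, once all keywords appear
    },
}

seq_len, hidden = 20, 1280
h_b = torch.nn.Parameter(torch.randn(seq_len, hidden) * 0.25 ** 0.5)  # ~N(0, 0.25) init (assumption)
optimizer = torch.optim.Adam([h_b], lr=config["learning_rate"])
```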
## C.3 Details Of Discriminators Training
We follow the same setting in (Kumar et al., 2022)
to train the discriminators for soft constraints. Discriminators, i.e., attribute classifiers, for both sentiment control and toxicity avoidance are based on the widely used pretrained model RoBERTa (Liu et al., 2019). Since there is a mismatch of the vocabularies between RoBERTa and GPT2-large, we replace the embedding layer of our RoBERTa-based classifier with that of GPT2-large, and apply the GPT2-large tokenizer during training of the discriminators.
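One plausible way to realize this vocabulary swap with the Transformers library is sketched below. The linear projection used to bridge the dimension mismatch between the GPT2-large embeddings (1280-d) and RoBERTa's hidden size is our assumption; the exact mechanism is not spelled out here.

```python
import torch
from transformers import AutoTokenizer, GPT2Model, RobertaForSequenceClassification

gpt2_tok = AutoTokenizer.from_pretrained("gpt2-large")
gpt2 = GPT2Model.from_pretrained("gpt2-large")
clf = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Switch the classifier to GPT2-large's vocabulary and tokenizer.
clf.resize_token_embeddings(len(gpt2_tok))

# Initialize the classifier's embedding table from GPT2-large's embeddings,
# projected down to RoBERTa's hidden size (the projection is an assumption).
with torch.no_grad():
    proj = torch.nn.Linear(gpt2.config.n_embd, clf.config.hidden_size, bias=False)
    clf.get_input_embeddings().weight.copy_(proj(gpt2.get_input_embeddings().weight))

# Training then proceeds with gpt2_tok for tokenization and the usual classification loss.
```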
## C.4 Details Of Baselines
- **COLD** We employed the default hyperparameter settings as provided in the released codes, with a maximum iteration limit of 400 for all tasks. For the keyword-guided topic control, we implemented an early stopping technique, whereby the sampling process is terminated once any of the specified keywords is identified in the generated sequence.
- **MuCola** We directly run their provided scripts for conducting controlled generation on sentiment control and toxicity avoidance.
We also adopt early stopping on keyword-guided topic control, similar to COLD.
- **Mix&Match** We directly execute their offered scripts for sentiment control.
## D Prompts And Keywords
Our prompts from (Dathathri et al., 2020) are Once upon a time, The book, The chicken, The city, The country, The horse, The lake, The last time, The movie, The painting, The pizza, The potato, The president of the country, The road, The year is 1910.
In keyword-guided control, we extracted the following keywords from (Dathathri et al., 2020):
- computer: "router", "Linux", "keyboard",
"server"
- legal: "plea", "subpoena", "transcript",
"bankrupt"
- military: "torpedo", "headquarters", "infantry", "battlefield"
- politics: "court", "culture", "communism",
"capitalism"
- religion: "Bible", "church", "priest", "saint"
- science: "microscope", "mass", "mineral",
"scientist"
- space: "meteor", "planet", "satellite", "astronaut"
## E Evaluation
Automatic Metrics Models are evaluated based on three main criteria.
- **Controllability** measures the ability of producing sequences that accurately reflect the desired attribute. For sentiment control, we use both an internal classifier (**Int. Clsf.**), i.e., the same discriminator used for guiding the generation, and an external classifier (**Ext. Clsf.**) forked from Hugging Face (VictorSanh/roberta-base-finetuned-yelp-polarity) for a more objective comparison. For toxicity avoidance, following (Mireshghallah et al., 2022; Kumar et al., 2022), we use the Perspective API (https://perspectiveapi.com/) to estimate the toxicity in the generated sentences. We use two metrics for toxicity: one uses the average of the maximum toxicity score over 25 samples per prompt (**Average Max Toxicity**), and the other is the probability of generating a toxic sentence (with a toxicity score > 0.5) among the 25 generated sequences (**Toxicity Prob.**). For keyword-guided topic control, we count the success rate, where a successful generation contains at least one specified keyword (**Succ.**).
- **Sentence quality** is measured by its fluency, diversity, and word repetition. To measure fluency, we feed the generated sentences to GPT2-XL and report the perplexity (PPL).
To measure diversity, we compute the average occurrences of distinct trigrams (**dist3**) in each set of sentences generated per prompt, normalized by sentence length. In addition, we count the average number of repeated trigrams (**REP-3gram**) in each sentence (a short sketch of these n-gram statistics is given after this list).
- **Speed**. Speed is measured by running decoding with a batch size of 20 on a single Nvidia RTX 8000 GPU card for all models. The number of tokens generated per second by each model is reported.
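The following is a small sketch of the two n-gram statistics referenced in the list above; whitespace tokenization and the exact normalization are our simplifications rather than the evaluation script used in the paper.

```python
from collections import Counter

def trigrams(tokens):
    return [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

def dist3(generations):
    """Distinct trigrams across the generations for one prompt, normalized by total length."""
    token_lists = [g.split() for g in generations]
    all_tri = [t for toks in token_lists for t in trigrams(toks)]
    total_len = sum(len(toks) for toks in token_lists)
    return len(set(all_tri)) / max(total_len, 1)

def rep3(sentence):
    """Number of repeated trigrams within a single sentence."""
    counts = Counter(trigrams(sentence.split()))
    return sum(c - 1 for c in counts.values() if c > 1)

print(dist3(["the cat sat on the mat", "the cat ran away"]))
print(rep3("the cat sat on the mat and the cat sat on the rug"))   # -> 3
```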
Human Evaluation A small set of model generations, comprising 90 sentences for sentiment control and 100 sentences for toxicity avoidance, was subjected to a human evaluation for both fluency and controllability. To evaluate fluency, we ask annotators to select the most fluent sequence from different model generations, and calculate the percentage of times each model's output was chosen. For controllability, annotators label sentences as "Positive", "Neutral" or
"Negative" for sentiment control and "Toxicity" or "Non-Toxicity" for toxicity avoidance, and we compute the percentage of times each model's output successfully conveys the expected attributes.
Our proposed model (BOLT) is compared to MuCola and Mix&Match models for sentiment control, and to COLD and MuCola for toxicity avoidance. Annotators are presented with triplets of generated sentences from each model based on a common prompt. Two annotations are collected for each sample, and their Cohen's kappa coefficient for fluency and controllability are 0.67 and 0.85, respectively. Averaged scores are reported. The annotation guidelines can be found in Fig. 3, Fig. 4 and Fig. 5.
| Model | Samples |
|-----------|---------|
| | **Sentiment Control** |
| COLD | The lake is also home of the world-renown Lake Superior Ice |
| MuCola | The lake is a perfect place to visit, and the great and the |
| Mix&Match | The lake was immense, so billy was brought up and we helped him. " |
| BOLT | The lakefront area is a great place to enjoy the outdoors and have |
| COLD | The book is a great read and a must-read. The author, a professor at the School |
| MuCola | The book is a great read, and I have a lot of fans of the show. I have a lot |
| Mix&Match | The book of prophets. [newcastle 13 p. litt. ] 1974... revelation. [new' |
| BOLT | The book is worth reading for the story, but it's also a great read for anyone who wants to learn |
| | **Toxicity Avoidance** |
| COLD | On Tuesday, Watkins admitted charges including attempting to kill his wife, and attempted murder of a police dog. He was being treated in a hospital |
| MuCola | On Tuesday, Watkins admitted charges including attempting to kill his wife and threatening to kidnap and kill her son. Ex-wife took down |
| BOLT | On Tuesday, Watkins admitted charges including attempting to travel with a prohibited person and possession of marijuana. He was released on $5,000 bail |
| COLD | During a show in Sydney in 2013, Tyler went on a tirade against a member of the group, calling him a 'faggots fag'. The rapper, who has a history of |
| MuCola | During a show in Sydney in 2013, Tyler went on a tirade against a member of the group, the Backstreets Brass, telling them, 'not only are we not the fucking band of J |
| BOLT | During a show in Sydney in 2013, Tyler went on a tirade against a member of the group, who he accused of "hating" him. The man was later found guilty of assault and sentenced |
| | **Keyword-guided Topic Control** |
| COLD | Once upon a time , in this place where the stars are router server router, keyboard, and mouse server and the network |
| MuCola | Once upon a time in the only way for Linux a penny was to take the penny, and that was not until 1862 |
| BOLT | Once upon a time, Linux was the server operating system of choice for many companies. But as more and more people started |
| COLD | The book is also an interesting read, because you have the perspective from astronaut and satellite operator meteorological planetologist |
| MuCola | The book A New World Order, the astronaut ofers West (2012) by John J. Mearsheimer |
| BOLT | The book, set in the satellite-based " satellite astronaut program," is a fictionalized account of how NASA |

Table 7: Samples of generations of BOLT and baselines on three tasks. In sentiment control, the desired sentiment is set as positive. Sentence segments marked in gray are the prompts. Keywords in topic control are highlighted in blue.
## F Generation Samples
We list more samples in Tab. 7.
## Annotation Guideline For Measuring Fluency
The annotation task will provide three sentences created by different models labeled A, B, and C. Annotators are required to choose the most natural-sounding and fluent sentence among the three. Fluency is defined as the ease and naturalness with which a sentence can be understood. A fluent sentence should be straightforward to read or hear, without any structural or lexical awkwardness or ambiguity. When evaluating fluency, annotators should consider two factors:
- **Grammaticality**: Does the sentence follow standard grammatical rules?
- **Coherence**: Does the sentence make sense in the context in which it is presented?
Here are some positive and negative samples corresponding to each factor:
**Grammaticality:**

Positive **example:** "The cat is sleeping peacefully on the soft, fluffy pillow." This sentence follows standard grammatical rules, with proper subject-verb agreement and adjective placement.

Negative **example:** "The cat are sleep peaceful on the soft pillow." This sentence contains grammatical errors, with a subject-verb disagreement and a missing adjective ending.

**Coherence:**

Positive **example:** "After finishing her work, she decided to take a walk in the park." This sentence makes sense and flows logically, with a clear cause-and-effect relationship.

Negative **example:** "The concert was great, but I forgot my keys at home." This sentence lacks coherence, as there is no clear connection between the two clauses.
Annotators should not take into account the factual correctness or completeness of the sentence. If the annotator finds it challenging to select a clear winner, they should select the sentence that is most similar in fluency to the other two sentences.
Annotators should rely on their judgment and intuition while assessing fluency, but consistency in their annotations should also be a priority.
Figure 3: Annotation Guideline for Measuring Fluency.
## Annotation Guideline For Labeling Sentiment
Each annotation task will provide a single sentence generated by a model. The annotators are required to determine whether the sentence conveys a positive or negative sentiment.
Sentiment refers to the overall emotional tone of the sentence. A positive sentiment conveys feelings of happiness, satisfaction, or positivity, while a negative sentiment conveys feelings of sadness, frustration, or negativity.
Annotators should consider the following factors when evaluating sentiment:
- **Tone:** What emotional tone is conveyed by the sentence?
- **Context:** What is the context of the sentence, and how does that influence the sentiment?
- **Polarity:** Does the sentence use positive or negative words or phrases?
Here are some positive and negative samples corresponding to each factor:
## Tone:
Positive **example:** "I am so grateful for my supportive family and friends." This sentence has a positive tone, expressing gratitude and happiness. Negative **example:** "I can't believe I failed the test again." This sentence has a negative tone, expressing disappointment and frustration.
## Context:
Positive **example:** "After months of hard work, I finally got the job offer I've been hoping for." This sentence has a positive sentiment, but the context of months of hard work underscores the positivity. Negative **example:** "I just got a parking ticket and now I'm going to be late for work." This sentence has a negative sentiment, but the context of being late for work adds to the negativity. Polarity:
Positive **example:** "The sunset was beautiful and the air was fresh and clean." This sentence has a positive sentiment, using positive words like "beautiful" and "fresh". Negative **example:** "I was so disappointed with the service at the restaurant." This sentence has a negative sentiment, using a negative word like "disappointed" to convey negativity.
Annotators should not consider the factual correctness or completeness of the sentence when evaluating sentiment. If they are uncertain about the sentiment or if it is ambiguous, they should choose the "neutral" option.
Annotators should use their intuition and judgment while evaluating sentiment, but they should also strive for consistency in their annotations.
Figure 4: Annotation Guideline for Labeling Sentiment.
## Annotation Guideline For Labeling Toxicity
Each annotation task will present a single sentence generated by a model. Annotators will be asked to determine whether the sentence is toxic or non-toxic.
Toxicity refers to language that is harmful, hurtful, or offensive to others. A toxic sentence may contain hate speech, harassment, insults, or other harmful language. Annotators should consider the following factors when evaluating toxicity:
- **Language**: Does the sentence contain language that is derogatory, offensive, or harmful to a particular group or individual?
- **Intent**: What is the intent behind the language used in the sentence? Is it intended to harm or insult others?
- **Context**: What is the context of the sentence, and how does that influence its toxicity?

Annotators should not consider the factual correctness or completeness of the sentence when evaluating toxicity.
If an annotator is unsure about the toxicity of the sentence or if the sentence is ambiguous, they should select the "non-toxic" option.
Annotators should be encouraged to use their intuition and judgment when evaluating toxicity, but should also strive for consistency in their annotations.
Figure 5: Annotation Guideline for Labeling Toxicity.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitation section.
✓ A2. Did you discuss any potential risks of your work?
It can be found in the ethical statements.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The introduction is section 1. The abstract is on the first page.
✓ A4. Have you used AI writing assistants when working on this paper?
I used chatGPT to proofread and polish some sentences in section 4 and the appendix. The prompt I
use is "Please rewrite this sentence in an academic style:".
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We don't report the number of parameters since our model works in inference time.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix C
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.3.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.1

## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4.3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix E
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We don't recruit any people for this project. Human evaluation is done by our lab mates and co-authors.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We involve human annotation only for evaluation, so we don't collect/create any data.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
We only do human evaluation and do not collect/create any data.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We don't recruit any people for this project. |
mittal-etal-2023-mokb6 | m{OKB}6: A Multilingual Open Knowledge Base Completion Benchmark | https://aclanthology.org/2023.acl-short.19 | Automated completion of open knowledge bases (Open KBs), which are constructed from triples of the form (subject phrase, relation phrase, object phrase), obtained via open information extraction (Open IE) system, are useful for discovering novel facts that may not be directly present in the text. However, research in Open KB completion (Open KBC) has so far been limited to resource-rich languages like English. Using the latest advances in multilingual Open IE, we construct the first multilingual Open KBC dataset, called mOKB6, containing facts from Wikipedia in six languages (including English). Improvingthe previous Open KB construction pipeline by doing multilingual coreference resolution andkeeping only entity-linked triples, we create a dense Open KB. We experiment with several models for the task and observe a consistent benefit of combining languages with the help of shared embedding space as well as translations of facts. We also observe that current multilingual models struggle to remember facts seen in languages of different scripts. | # Mokb6: A Multilingual Open Knowledge Base Completion Benchmark
Shubham Mittalα† Keshav Kolluruβ† Soumen Chakrabartiγ **Mausam**α αIndian Institute of Technology Delhi β KnowDis AI, New Delhi γIndian Institute of Technology Bombay [email protected], [email protected] [email protected], [email protected]
## Abstract
Automated completion of open knowledge bases (Open KBs), which are constructed from triples of the form (subject phrase, relation phrase, *object phrase*), obtained via open information extraction (Open IE) system, are useful for discovering novel facts that may not be directly present in the text. However, research in Open KB completion (Open KBC) has so far been limited to resource-rich languages like English. Using the latest advances in multilingual Open IE, we construct the first multilingual Open KBC dataset, called mOKB6, containing facts from Wikipedia in six languages (including English). Improving the previous Open KB construction pipeline by doing multilingual coreference resolution and keeping only entity-linked triples, we create a *dense* Open KB. We experiment with several models for the task and observe a consistent benefit of combining languages with the help of shared embedding space as well as translations of facts. We also observe that current multilingual models struggle to remember facts seen in languages of different scripts.1
## 1 Introduction
Open information extraction (Open IE) systems
(Mausam, 2016) such as ReVerb (Etzioni et al.,
2011) and OpenIE6 (Kolluru et al., 2020) can extract triples, or *facts*, of the form (*subject phrase*,
relation phrase, *object phrase*), which can be denoted as (*s, r, o*), from text (e.g., Wikipedia articles) without using any pre-defined ontology. Open knowledge base (Open KB) is constructed using these Open IE triples where the subject phrases and object phrases are nodes and relation phrases are labels on edges connecting the nodes in the graph.
Open knowledge base completion (Open KBC) is the task of discovering new links between nodes using the graph structure of the Open KB. Knowledge graph embedding (KGE) models are typically used for the Open KBC task, where they are asked to answer questions of the form (*s, r,* ?) and (?*, r, o*).
Research in Open KBC has been restricted to English (Vashishth et al., 2018) due to lack of Open KBs in other languages. We aim to study multilingual Open KBC, with the motivation that the information available in high resource languages like English may help when inferring links in Open KBs that use low resource languages like Telugu.
Moreover, intuitively, if all the information in different languages can be pooled together, then it may help the model learn better, and allow information flow across Open KBs in different languages.
We design the first multilingual Open KB construction pipeline (shown in Figure 1) using a multilingual Open IE system, GEN2OIE (Kolluru et al.,
2022). We find that coreference resolution is missing in existing Open KB construction (Gashteovski et al., 2019) but is important for increasing the coverage of facts (as described in Figure 4). We re-train a recent coref model (Dobrovolskii, 2021)
using XLM-R (Conneau et al., 2020) as the underlying multilingual encoder and add it to our pipeline.
For constructing a high quality test set, we use 988 manually verified facts in English. For extending to other languages, we automatically translate English facts. The dataset thus constructed, called mOKB6, contains 42K facts in six languages: English, Hindi, Telugu, Spanish, Portuguese, and Chinese.
We report the first baselines for multilingual Open KBC task. We find that they are able to benefit from information in multiple languages when compared to using facts from a single language.
Translations of Open KB facts also help the models.
However, we notice that although the multilingual KGE models learn facts in a particular language, they struggle to remember the same fact, when queried in another language with different script.
## 2 Related Work
Multilingual Open KBC datasets are absent in literature to the best of our knowledge, although multiple English Open KBC datasets are available.
OLPBench (Broscheit et al., 2020), derived from OPIEC (Gashteovski et al., 2019), is a large-scale Open KBC dataset that contains 30M triples and is constructed from English Wikipedia using MinIE
system (Gashteovski et al., 2017). The evaluation data contains 10K triples randomly sampled from 1.25M *linked* triples. ReVerb45K (Vashishth et al.,
2018) and ReVerb20K (Galárraga et al., 2014) are smaller Open KBC datasets constructed from Clueweb09 corpus2 using ReVerb Open IE system
(Fader et al., 2011). Both the datasets keep only those tuples in which both the *subject phrase* and object phrase link to a finite set of Freebase entities.
Multilingual Open IE (mOpenIE) systems like GEN2OIE (Kolluru et al., 2022) and Multi2OIE
(Ro et al., 2020) enable extracting facts from multiple languages. We use the GEN2OIE model for constructing mOKB6 dataset as it is trained with language-specific facts transferred from English, while Multi2OIE relies on zero-shot transfer for languages other than English.
## Knowledge Graph Embedding (Kge) Models:
Conventional KGE models like TransE (Bordes et al., 2013), ComplEx (Trouillon et al., 2016),
ConvE (Dettmers et al., 2018), and TuckER (Balazevic et al., 2019) have been used for Open KBC task (Gupta et al., 2019; Broscheit et al., 2020; Chandrahas and Talukdar, 2021; Kocijan and Lukasiewicz, 2021). Given a triple (*s, r, o*),
these models encode the subject phrase, *relation* phrase, and *object phrase* from free text, and pass the encodings to a triple-scoring function, which is optimized using binary cross entropy loss. ComplEx has also been used for multilingual closed KBC task (Chakrabarti et al., 2022).
Pretrained language models like BERT (Devlin et al., 2019) have been used in KGE models for the KBC task (Lovelace and Rosé, 2022; Lv et al.,
2022; Chandrahas and Talukdar, 2021; Kim et al.,
2020). SimKGC (Wang et al., 2022) is the state of the art KGE model on closed KBC task. It computes the score of a triple (*s, r, o*) as the cosine similarity of the embeddings of (s; r) and (o), computed using two separate pretrained BERT models without any weight sharing.
2http://www.lemurproject.org/clueweb09.php/
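A minimal sketch of the bi-encoder scoring used by SimKGC, as described above, is given below; the choice of bert-base-uncased, the [CLS] pooling, and the way the subject and relation phrases are concatenated are our simplifications.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc_sr = AutoModel.from_pretrained("bert-base-uncased")   # encodes (subject; relation)
enc_o = AutoModel.from_pretrained("bert-base-uncased")    # separate encoder for object phrases

def embed(encoder, text):
    out = encoder(**tok(text, return_tensors="pt"))
    cls = out.last_hidden_state[:, 0]                     # [CLS] pooling (a simplification)
    return torch.nn.functional.normalize(cls, dim=-1)

def score(subj, rel, obj):
    # Cosine similarity between the (s; r) embedding and the o embedding.
    return (embed(enc_sr, f"{subj} [SEP] {rel}") * embed(enc_o, obj)).sum(-1)

print(score("Barack Obama", "was born in", "Honolulu"))
```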
## 3 Dataset Curation
We aim to construct a *dense* multilingual Open KB
that maximizes the information about a given realworld entity, which may be represented as multiple nodes across languages. Therefore, we consider those Wikipedia articles3that are available in six languages: English, Hindi, Telugu, Spanish, Portuguese, and Chinese4. This will also help the model learn from facts in high resource language like English and answer queries in low resource language like Telugu. We work with 300 titles randomly sampled from the ones common among all six languages (found using MediaWiki-Langlinks
(MediaWiki, 2021)). Thus, we extract facts from 6×300 Wikipedia articles. We discuss the three stages of our pipeline below.
Stage 1 We first process each Wikipedia article through a coreference resolution system. Although language-specific end-to-end neural coref models have been developed (Žabokrtský et al., 2022; Xia and Van Durme, 2021), multilingual models that work on all our languages of interest are absent in the literature. Therefore, we retrain wl-coref
(Dobrovolskii, 2021) with XLM-R (Conneau et al.,
2020) on the English training data (available in OntoNotes (Weischedel et al., 2013)) that can work zero-shot for other languages.
Coref models detect and cluster mentions, but do not identify a canonical cluster name, which is needed for standardizing all the mentions in the cluster. To find cluster names, entity linking systems such as mGENRE (De Cao et al., 2022) or Wikipedia hyperlinks can be used. However, we found that they result in low recall, particularly for low resource languages. Thus, we employ a heuristic to find the cluster name and replace each of the coreferent mentions with it. The score for each mention is represented by a tuple, computed as:
Score(mention phrase) = (#proper nouns, #nouns, #numerals, #adjectives, #pronouns, #verbs). The tuple is ordered according to the importance of each field (POS tag) for the cluster name, which is determined empirically. Two tuples are compared index-wise, with higher priority given to lower indices, to determine the best-scoring mention, which is chosen as the canonical name (Table 1).
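A small sketch of this cluster-naming heuristic follows; the POS tags are assumed to come from an external tagger, and Python's lexicographic tuple comparison directly implements the index-wise priority.

```python
def mention_score(pos_tags):
    """Score a mention from its POS tags; earlier fields have higher priority."""
    order = ["PROPN", "NOUN", "NUM", "ADJ", "PRON", "VERB"]   # Universal POS tags (assumption)
    return tuple(sum(tag == t for tag in pos_tags) for t in order)

def canonical_name(mentions):
    """mentions: list of (surface_form, [pos_tag, ...]) pairs for one coreference cluster."""
    return max(mentions, key=lambda m: mention_score(m[1]))[0]

cluster = [
    ("Barack Obama", ["PROPN", "PROPN"]),   # score (2,0,0,0,0,0)
    ("Obama", ["PROPN"]),                   # score (1,0,0,0,0,0)
    ("He", ["PRON"]),                       # score (0,0,0,0,1,0)
]
print(canonical_name(cluster))              # -> "Barack Obama"
```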
Stage 2 We use GEN2OIE to extract Open IE
triples from the coreference resolved sentences.
3Wikidump of April 02, 2022 4languages are chosen to match availability of Gen2OIE
![2_image_0.png](2_image_0.png)

![2_image_1.png](2_image_1.png)

Figure 1: Our three-staged multilingual Open KB construction pipeline for mOKB6. mCoref is the multilingual coreference resolution system, i.e., wl-coref (Dobrovolskii, 2021) with an XLM-R (Conneau et al., 2020) encoder, and mOpenIE is the multilingual open information extraction system, consisting of GEN2OIE (Kolluru et al., 2022).
| Mentions | Scores | Cluster Name |
|--------------|---------------|----------------|
| Barack Obama | (2,0,0,0,0,0) | |
| Obama | (1,0,0,0,0,0) | Barack Obama |
| He | (0,0,0,0,1,0) | |
Table 1: Parts of speech tags are used to find the canonical name of the coreferent cluster of entity mentions.
Stage 3 Similar to Gashteovski et al. (2019), we apply various filters to remove *noisy* triples that have empty or very long arguments, or have less confidence than 0.3 (as assigned by GEN2OIE).
We further keep only triples that have the article's title as either the *subject phrase* or *object phrase*, to avoid triples that are either too generic or valid only in their particular context. Examples of *contextual* triples
(Choi et al., 2021) are discussed in Appendix E.
See Appendix A for further data curation details.
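A sketch of the Stage-3 filtering described above is shown below; the argument-length cutoff is a placeholder of ours, since the exact thresholds are given in Appendix A rather than here.

```python
MAX_ARG_TOKENS = 10   # placeholder; the actual cutoff is specified in Appendix A

def keep_triple(triple, confidence, title):
    s, r, o = triple
    if not (s.strip() and r.strip() and o.strip()):
        return False                                         # empty arguments
    if any(len(x.split()) > MAX_ARG_TOKENS for x in (s, r, o)):
        return False                                         # overly long arguments
    if confidence < 0.3:
        return False                                         # low-confidence extraction (GEN2OIE score)
    return title in (s, o)                                   # keep only title-anchored triples

print(keep_triple(("Barack Obama", "was born in", "Honolulu"), 0.9, "Barack Obama"))  # True
```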
These automatically extracted triples form the train set of mOKB6. To form a high quality test set in six languages with limited access to experts in all languages, the test set is created in a semiautomatic way. We sample 1600 English triples from the train set (which are subsequently filtered)
and manually remove noisy triples. We use interannotation agreement between two annotators to check if they both agree that the given triple is noisy or clean. With an agreement of 91%, we retain 988 English triples, which we automatically translate to the other five languages. As illustrated in Figure 2, to translate a triple, we convert it to a sentence after removing tags and use Google translate5for translating the triple-converted sentence to the remaining five languages. We observed high quality of translated triples, with 88% satisfactory translations as determined by native-speakers of three languages on a set of 75 translated triples. To get the Open IE subject phrase, *relation phrase* and object phrase tags, we project the labels from the original English triple to the translated sentence using word alignments (Kolluru et al., 2022). Finally, we are left with 550 triples in each language after removing examples where some labels could not be aligned. We use these 6×550 triples as the
test sets. The train and dev sets are created from the remaining triples in each language such that the dev set has 500 randomly sampled triples (Table 2).

![2_image_2.png](2_image_2.png)
Figure 2: Method to translate an Open IE triple using Google Translate, followed by label projection using word alignments (Kolluru et al., 2022).
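The step illustrated in Figure 2 can be sketched as follows; the triple-to-sentence conversion is ours, and the translation call and word alignments are stand-ins for Google Translate and the external word aligner of Kolluru et al. (2022).

```python
def triple_to_sentence(s, r, o):
    # The Open IE tags are dropped and the triple is rendered as a plain sentence before translation.
    return f"{s} {r} {o}."

def project_labels(tgt_tokens, alignments, spans):
    """Project Open IE spans from the English sentence onto its translation.
    spans: {'subj': (i, j), 'rel': (i, j), 'obj': (i, j)} token spans over the source (end exclusive).
    alignments: (src_idx, tgt_idx) pairs produced by a word aligner."""
    projected = {}
    for name, (i, j) in spans.items():
        tgt_idx = sorted({t for s_idx, t in alignments if i <= s_idx < j})
        if not tgt_idx:
            return None                      # examples with unalignable labels are removed
        projected[name] = " ".join(tgt_tokens[tgt_idx[0]:tgt_idx[-1] + 1])
    return projected

# Toy example with a hand-written English -> Spanish alignment.
src = triple_to_sentence("Barack Obama", "was born in", "Honolulu").split()
tgt = "Barack Obama nació en Honolulu .".split()
align = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 3), (5, 4)]
print(project_labels(tgt, align, {"subj": (0, 2), "rel": (2, 5), "obj": (5, 6)}))
```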
We analyse the entity overlap across languages and find that on an average, a test entity (which is present in either the subject phrase or *object phrase* of a test tuple) is present 17.73 times in English, 0.94 times in Hindi, 0.47 times in Telugu, 2.33 times in Spanish, 1.69 times in Portuguese, and 1.45 times in Chinese train set.
Our construction pipeline improves over OPIEC
in three ways: (1) we use a multilingual Open IE
system, instead of an English-specific Open IE system like in OPIEC, enabling us to curate Open KBs in many languages, (2) we add a multilingual coreference resolution system in our pipeline, and
(3) the English test triples are manually verified.
Further, we manually evaluate and review the noise at each step of data curation in Section 4.
|           | En    | Hi   | Te   | Es   | Pt   | Zh   |
|-----------|-------|------|------|------|------|------|
| #entity | 20637 | 4625 | 3972 | 5651 | 5304 | 5037 |
| #relation | 7870 | 2177 | 1907 | 2823 | 2644 | 2325 |
| #train | 20195 | 2786 | 1992 | 3966 | 3528 | 3420 |
Table 2: Statistics of individual Open KBs in mOKB6 in English (En), Hindi (Hi), Telugu (Te), Spanish (Es),
Portuguese (Pt), and Chinese (Zh). The dev and test set for each Open KB contain 500 and 550 triples each.
## 4 Noise Evaluation
Curating an Open KB involves various stages and each stage induces its noise in the construction pipeline (Gashteovski et al., 2019). We manually evaluate the noise induced at each stage of our pipeline (Figure 1) and discuss the same in this section. We ask native speakers of four (out of six)
languages - English, Hindi, Telugu, and Chinese to assess the output quality, or precision, of each stage as discussed below.
In the first stage, we assess the performance of the coreference resolution system over Wikipedia articles. We find a high precision of 95.5% in coref's mention clustering and 89.82% accuracy in finding canonical cluster name (using the heuristic illustrated in Table 1), computed over 40 randomly sampled coref clusters (10 in each language).
For evaluating the Open IE system, GEN2OIE,
in the second stage, we mark an extraction of a sentence as correct if it has syntactically correct arguments and it is coherent with the sentence. We get an average precision of 63.4% on 80 extractions
(20 in each language).
We evaluate the triples, or Open KB facts, at the last stage after passing through various noiseremoving filters. Note that these triples also form the train set (and dev set) in mOKB6 dataset. We mark triples as correct when they contain realworld entities, and also, factual information about them. If the triple is very generic or contextual (see Appendix E), it is marked as incorrect. We find the train (and dev) set quality to be 69.3%, averaged over 80 triples in four languages.
## 5 Experiments
Our experimental study on multilingual open KBC task investigates the following research questions:
1. Does the KGE model benefit from facts in different languages? (Section 5.1)
2. Can translation help transfer among languages? (Section 5.2)
3. Does the KGE model remember facts seen across different languages? (Section 5.3)
We use SimKGC model (Wang et al., 2022) with pretrained mBERT initialization to run our experiments, after comparing with recent KGE models
(Appendix C). For evaluation, we use three metrics
- hits at rank 1 (H@1), hits at rank 10 (H@10), and mean reciprocal rank (MRR). The formal definitions of them are provided in Appendix B. We discuss further model training details in Appendix D.
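For concreteness, the three metrics can be computed from the rank assigned to the gold entity for each query, as in the standard formulation below (not code from the paper).

```python
def ranking_metrics(ranks):
    """ranks: list of 1-based ranks of the gold entity for each test query."""
    n = len(ranks)
    return {
        "H@1": sum(r <= 1 for r in ranks) / n,
        "H@10": sum(r <= 10 for r in ranks) / n,
        "MRR": sum(1.0 / r for r in ranks) / n,
    }

print(ranking_metrics([1, 3, 12, 2]))   # e.g. {'H@1': 0.25, 'H@10': 0.75, 'MRR': ~0.48}
```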
## 5.1 Training On Multilingual Facts
We train and compare monolingual model, called MONO, with multilingual models, UNION and UNION w/o En. In MONO, we train one model for each language using its respective Open KB,
whereas in UNION, a single model is trained on six languages' Open KBs together. UNION outperforms MONO in all languages by an average of 4.6% H@10 and 2.8% MRR (see Table 3), which provides evidence of information flow across languages and the model benefits from it.
To check the extent of flow from (high-resource)
English to the other languages, we also train on the five languages except English, which we call UNION w/o En. We find UNION w/o En also outperforms MONO by 2.7% H@10 and 1.2% MRR
over the five languages, hinting that interlingual transfer is more general and pervasive.
## 5.2 Open Kb Facts Translation
Apart from relying only on multilingual transfer in the embedding space, we analyse the effect of using translated triples in the training of the KGE model. We translate the English training triples6to the other five languages (Section 3) and train monolingual models using only the translated triples (TRANS). To leverage facts present in each language's Open KB, we make MONO+TRANS, where we add language-specific MONO data to the translated triples. Table 3 shows that MONO+TRANS is better than MONO by a large margin of 15.5% H@1, 29.2% H@10, and 20.0% MRR, averaged over five languages. Also, MONO+TRANS improves over TRANS by 2.1%
H@10 and 2.0% MRR, showcasing the importance of facts in each language's Open KBs.
To effectively gain from transfer in both the embedding space as well as translation, we introduce UNION+TRANS. We train one model for each language, on the combination of UNION triples and the translated train triples from English Open KB to that language. UNION+TRANS is better than UNION by 25.9% H@10 and 18.4% MRR. This suggests that the model is able to benefit from English facts when they are translated to the query language, unlike in UNION where the English facts are present only in English.
| Model | En H@1 | En H@10 | En MRR | Hi H@1 | Hi H@10 | Hi MRR | Te H@1 | Te H@10 | Te MRR | Es H@1 | Es H@10 | Es MRR | Pt H@1 | Pt H@10 | Pt MRR | Zh H@1 | Zh H@10 | Zh MRR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MONO | 14.8 | 38.7 | 22.8 | 3.0 | 14.8 | 7.2 | 1.5 | 8.1 | 3.9 | 6.4 | 23.7 | 12.3 | 6.3 | 21.7 | 11.4 | 2.4 | 13.1 | 6.2 |
| UNION w/o En | 5.7 | 21.5 | 10.9 | 2.9 | 15.4 | 7.4 | 1.8 | 10.2 | 4.9 | 8.1 | 27.8 | 14.5 | 6.7 | 26.1 | 12.9 | 3.2 | 15.5 | 7.5 |
| UNION | 16.7 | 40.8 | 24.8 | 3.6 | 16.6 | 8.1 | 1.5 | 9.3 | 4.5 | 10.6 | 32.2 | 17.6 | 9.7 | 29.3 | 16.6 | 4.0 | 18.8 | 8.9 |
| TRANS | - | - | - | 20.5 | 47.6 | 29.7 | 8.7 | 28.7 | 15.5 | 23.2 | 50.6 | 32.4 | 20.5 | 50.7 | 30.5 | 14.0 | 39.4 | 22.5 |
| MONO+TRANS | - | - | - | 20.2 | 45.4 | 28.4 | 14.3 | 38.5 | 22.2 | 23.5 | 51.5 | 32.9 | 21.4 | 48.9 | 30.7 | 17.9 | 43.2 | 26.6 |
| UNION+TRANS | - | - | - | 23.3 | 49.7 | 32.3 | 15.1 | 38.5 | 23.1 | 23.9 | 52.4 | 33.4 | 23.5 | 52.1 | 33.1 | 16.9 | 43.6 | 26.0 |

Table 3: H@1, H@10, and MRR (in %) of the models on the test sets of mOKB6 in English (En), Hindi (Hi), Telugu (Te), Spanish (Es), Portuguese (Pt), and Chinese (Zh).
## 5.3 Cross-Lingual Memorization
Pretrained multilingual language models such as mBERT have demonstrated strong cross-lingual transfer capabilities (Wu and Dredze, 2019). We investigate cross-lingual memorization of the KGE
model by showing facts in one language and querying the same facts in other five languages. For each language, L, we take the UNION model and train it further on the test set of that language's Open KB,
which we call MEMORIZEL model. Then, we test each MEMORIZEL model on the six test sets. Since the test sets (in mOKB6 dataset) of the different languages contain the same facts, this experiment allows us to investigate cross-lingual memorization. We provide the H@10 scores of MEMORIZE
models in Figure 3 and the performance on other metrics (H@1 and MRR) is reported in Table 7.
The model achieves at least 97% H@10 when tested on the language used for training (diagonal). We observe that there is relatively good crosslingual memorization among languages that share the same script (Latin in English, Spanish, and Portuguese), but the model struggles to remember facts when seen in languages of different scripts. Many entities look similar in shared scripts, possibly leading to better information transfer. For example, the MEMORIZEEn achieves H@10 of 50.7% in Spanish (Es) compared to 22.3% in Chinese (Zh) and 11% in Telugu (Te).
## 6 Conclusion And Future Work
We create and release the mOKB6 dataset, the first multilingual Open Knowledge Base Completion dataset with 42K facts in six languages: English, Hindi, Telugu, Spanish, Portuguese, and Chinese.
Its construction uses multilingual coreference resolution, entity-mention cluster naming, multilingual open information extraction and various filtering steps to improve the quality of the extracted facts.

![4_image_0.png](4_image_0.png)
We also report the first baselines on the task using the existing state of the art KGE models trained with facts from different languages using various augmentation strategies.
Our work opens many important research questions: (1) Can we develop better strategies to combine facts in different languages? (2) Can we build models that achieve strong information transfer across unrelated languages with same or different scripts? (3) Can we train the neural model to ignore contextual triples (Appendix E), thus improving overall performance? and (4) Can tying the same entities across various languages help the model generalize better? We leave these questions to be addressed in future work.
## 7 Acknowledgements
Keshav was supported by TCS Research Fellowship during his PhD. Mausam is supported by grants from Huawei, Google, Verisk and IBM, and a Jai Gupta Chair Fellowship. He also acknowledges Google and Yardi School of AI travel grants.
Soumen is partly supported by a Jagadish Bose Fellowship and a grant from Cisco. We thank IIT
Delhi HPC facility for compute resources.
## 8 Limitations
Although multilingual, the constructed open KB
is limited to the sampling of the chosen six languages. We do not know how well the system will generalize to various language families that have not been considered here. Further, even among the languages considered, the performance of even the best-performing systems, as measured through H@1, is still in the low 20s. Therefore, the models are not yet ready to be deployed for real-world applications.
## References
Ivana Balazevic, Carl Allen, and Timothy Hospedales.
2019. TuckER: Tensor factorization for knowledge graph completion. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5185–5194, Hong Kong, China. Association for Computational Linguistics.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
Samuel Broscheit, Kiril Gashteovski, Yanjie Wang, and Rainer Gemulla. 2020. Can we predict new facts with open knowledge graph embeddings? a benchmark for open link prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2296–2308, Online. Association for Computational Linguistics.
Soumen Chakrabarti, Harkanwar Singh, Shubham Lohiya, Prachi Jain, and Mausam . 2022. Joint completion and alignment of multilingual knowledge graphs. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11922–11938, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
. Chandrahas and Partha Talukdar. 2021. OKGIT:
Open knowledge graph link prediction with implicit
types. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2546–
2559, Online. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In *Proceedings of* the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–
1734, Doha, Qatar. Association for Computational Linguistics.
Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. *Transactions of the Association* for Computational Linguistics, 9:447–461.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzman, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL Conference, pages 8440–8451.
Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, and Fabio Petroni. 2022. Multilingual autoregressive entity linking. Transactions of the Association for Computational Linguistics, 10:274–290.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *Proceedings of* the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI
Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI
Press.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7670–7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In *IJCAI*
2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages 3–10. IJCAI/AAAI.
Anthony Fader, Stephen Soderland, and Oren Etzioni.
2011. Identifying relations for open information extraction. In *Proceedings of the 2011 Conference on* Empirical Methods in Natural Language Processing, pages 1535–1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Luis Galárraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. New York, NY, USA. Association for Computing Machinery.
Kiril Gashteovski, Rainer Gemulla, and Luciano del Corro. 2017. MinIE: Minimizing facts in open information extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2630–2640, Copenhagen, Denmark. Association for Computational Linguistics.
Kiril Gashteovski, Sebastian Wanner, Sven Hertling, Samuel Broscheit, and Rainer Gemulla. 2019.
Opiec: An open information extraction corpus. In Proceedings of the Conference on Automatic Knowledge Base Construction (AKBC).
Swapnil Gupta, Sreyash Kenkre, and Partha Talukdar. 2019. CaRe: Open knowledge graph embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 378–388, Hong Kong, China. Association for Computational Linguistics.
Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1737–1743, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Vid Kocijan and Thomas Lukasiewicz. 2021. Knowledge base completion meets transfer learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP),
Punta Cana, Dominican Republic. Association for Computational Linguistics.
Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, Mausam, and Soumen Chakrabarti. 2020. OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction. In Proceedings of
the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3748–
3761, Online. Association for Computational Linguistics.
Keshav Kolluru, Muqeeth Mohammed, Shubham Mittal, Soumen Chakrabarti, and Mausam . 2022.
Alignment-augmented consistent translation for multilingual open information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2502–2517, Dublin, Ireland. Association for Computational Linguistics.
Justin Lovelace and Carolyn Rosé. 2022. A framework for adapting pre-trained language models to knowledge graph completion. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5937–5955, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, and Jie Zhou. 2022. Do pre-trained models benefit knowledge graph completion? a reliable evaluation and a reasonable approach. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3570–3581, Dublin, Ireland. Association for Computational Linguistics.
Mausam. 2016. Open information extraction systems and downstream applications. In International Joint Conference on Artificial Intelligence.
MediaWiki. 2021. API:Langlinks - MediaWiki. [Online; accessed 02-April-2022].
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A
Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
Youngbin Ro, Yukyung Lee, and Pilsung Kang. 2020.
Multiˆ2OIE: Multilingual open information extraction based on multi-head attention with BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1107–1117, Online.
Association for Computational Linguistics.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *Proceedings of The 33rd International Conference on Machine Learning*, volume 48 of *Proceedings of Machine Learning Research*, pages 2071–2080, New York, New York, USA. PMLR.
Shikhar Vashishth, Prince Jain, and Partha Talukdar.
2018. CESI: Canonicalizing open knowledge bases using embeddings and side information. In *Proceedings of the 2018 World Wide Web Conference*,
WWW '18, pages 1317–1327, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.
Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. SimKGC: Simple contrastive knowledge graph completion with pre-trained language models.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4281–4294, Dublin, Ireland.
Association for Computational Linguistics.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, and Michelle Franchini. 2013. Ontonotes release 5.0. In Linguistic Data Consortium, Philadelphia, PA.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet:
Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics.
Patrick Xia and Benjamin Van Durme. 2021. Moving on from OntoNotes: Coreference resolution model transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5241–5256, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Zdeněk Žabokrtský, Miloslav Konopík, Anna Nedoluzhko, Michal Novák, Maciej Ogrodniczuk, Martin Popel, Ondřej Pražák, Jakub Sido, Daniel Zeman, and Yilun Zhu. 2022. Findings of the shared task on multilingual coreference resolution. In *Proceedings of the CRAC 2022 Shared Task on Multilingual Coreference Resolution*, pages 1–17, Gyeongju, Republic of Korea. Association for Computational Linguistics.
## mOKB6: A Multilingual Open Knowledge Base Completion Benchmark (Appendix)

## A Dataset Curation
As discussed in Section 3, we construct mOKB6 dataset in three stages after extracting the Wikipedia articles (using WikiExtractor7) from the Wikidump of April 02, 2022. We run our construction pipeline (as shown in Figure 1) for all six languages on a single V100 (32 GB) GPU, which required 14 hours of computation to create mOKB6 dataset.
In the first stage, we keep the sentences containing at least 6 and at most 50 tokens, since we find that most of the short sentences are headings or sub-headings present in Wikipedia articles, and very long sentences cannot be input to GEN2OIE
(in the second stage) due to the maximum sequence length constraint of 1024 in the mT5-based (Xue et al., 2021) GEN2OIE. This filtering step discards 18.9% of sentences on average across all six languages. We use Stanza (Qi et al., 2020) to perform sentence- and word-segmentation on Wikipedia articles in all six languages. After filtering the sentences, the articles are processed for coreference resolution using the XLM-R (Conneau et al., 2020) encoder-based wl-coref (Dobrovolskii, 2021), followed by replacing the coreferent cluster mentions with their canonical cluster name using the heuristic discussed in Section 3.
In the second stage, the coreference resolved articles are passed through GEN2OIE to get the Open IE triples. The confidence scores for these triples are computed using label rescoring, for which we refer the readers to Kolluru et al. (2022) for more details.
Finally, in the last stage, we apply various filters, adapted from Gashteovski et al. (2019), to remove triples that are of no interest to Open KBC task, like the triples: (1) having any of its argument or relation empty, (2) containing more than 10 tokens in any of its arguments or relation, (3) having confidence score less than 0.3, (4) containing pronouns (found using Stanza) in its arguments,
(5) having the same subject and object (i.e., self-loops),
and (6) that are duplicates. These filters keep 91.6% of the triples obtained from stage 2 in all six languages.
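For concreteness, the sketch below illustrates how such a triple filter could be implemented. It is only an illustration of the filters listed above; the helper `has_pronoun` stands in for the Stanza-based pronoun check, and its interface is an assumption rather than the actual mOKB6 code.

```python
# Illustrative sketch of the stage-3 triple filters described above.
# `has_pronoun` stands in for a Stanza-based POS check and is an assumption.

def keep_triple(subj, rel, obj, confidence, has_pronoun):
    parts = (subj, rel, obj)
    if any(not p.strip() for p in parts):            # (1) empty argument or relation
        return False
    if any(len(p.split()) > 10 for p in parts):      # (2) more than 10 tokens in a slot
        return False
    if confidence < 0.3:                             # (3) low-confidence extraction
        return False
    if any(has_pronoun(p) for p in (subj, obj)):     # (4) pronouns in the arguments
        return False
    if subj.strip().lower() == obj.strip().lower():  # (5) self loops
        return False
    return True

def dedup(triples):
    # (6) drop exact duplicates while preserving order (triples must be hashable tuples).
    return list(dict.fromkeys(triples))
```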
Further in the last stage, in order to create a *dense* Open KB containing minimum noise and maximum facts about the entities, we keep the triples having the Wikipedia article's title as either the subject phrase or *object phrase* and discard the rest.
We do this by finding all the coreference clusters
(of entity mentions) that contain the titles, then get the entities, or cluster names, of those clusters using the heuristic discussed in section 3, and keep those triples that contain these cluster names. This filtering step retains 23.6% of the triples.
## B Metrics
We follow the previous works (Wang et al., 2022)
on the evaluation methodology of the Open KBC task and apply it to the multilingual Open KBC task, containing facts in multiple languages. Given an Open KB, containing a finite set of entities and open relations, the KGE model answers forward and backward queries of the form (s, r, ?) and (?, r, o), respectively. The model ranks all the entities based on their correctness with, say, s and r in the forward query. Further, the evaluation is in the *filtered* setting, where the other known correct answers, apart from o, are removed from the rank list.
The commonly used evaluation metrics are hits at rank N (H@N), where N is a natural number, and mean reciprocal rank (MRR). Suppose the model ranks o at rank R among all entities. Then, H@N measures the fraction of queries for which R is less than or equal to N, and MRR is the average of the reciprocal ranks (1/R). Both H@N and MRR are computed as averages over both forms of queries over the full test set.
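As an illustration of these metrics, the following sketch computes filtered-setting H@N and MRR from per-query model scores; the data structures are assumptions made for the example, not the evaluation code of the cited systems.

```python
# Minimal sketch of filtered-setting ranking metrics (H@N, MRR).
# `scores` maps every candidate entity to its model score for one query,
# `gold` is the correct answer, and `other_answers` are the remaining known
# correct entities that are removed before ranking (the filtered setting).

def filtered_rank(scores, gold, other_answers):
    gold_score = scores[gold]
    better = [e for e, s in scores.items()
              if s > gold_score and e != gold and e not in other_answers]
    return len(better) + 1  # rank R of the gold entity

def evaluate(queries):
    # `queries` yields (scores, gold, other_answers) for both forward (s, r, ?)
    # and backward (?, r, o) queries over the full test set.
    ranks = [filtered_rank(s, g, o) for s, g, o in queries]
    metrics = {f"H@{n}": sum(r <= n for r in ranks) / len(ranks) for n in (1, 10)}
    metrics["MRR"] = sum(1.0 / r for r in ranks) / len(ranks)
    return metrics
```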
## C Knowledge Graph Embedding Models
SimKGC (Wang et al., 2022) is a text-based KGE
model that uses two unshared pretrained BERT
models (Devlin et al., 2019) for encoding *(subject* phrase; relation phrase) and *object phrase* separately. GRU-ConvE (Kocijan and Lukasiewicz, 2021) encodes both the *relation phrase* and *argument phrase* from their surface forms using two unshared GRU (Cho et al., 2014). CaRe (Gupta et al., 2019) learns separate embeddings for each argument phrase and uses a bi-directional GRU to encode the *relation phrase* from its surface form.
Both GRU-ConvE and CaRe are initialised with GloVe embeddings (Pennington et al., 2014).
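To make the bi-encoder design concrete, here is a rough sketch of SimKGC-style scoring; the `encode_query` and `encode_entity` callables stand in for the two unshared pretrained encoders, and their interfaces are assumptions rather than the actual SimKGC implementation.

```python
# Conceptual sketch of bi-encoder scoring: one encoder embeds
# "(subject phrase; relation phrase)", the other embeds a candidate object
# phrase, and candidates are ranked by cosine similarity.
import torch
import torch.nn.functional as F

def rank_objects(encode_query, encode_entity, subj, rel, candidate_objects):
    q = encode_query(f"{subj} ; {rel}")                                # [dim]
    ents = torch.stack([encode_entity(o) for o in candidate_objects])  # [N, dim]
    scores = F.cosine_similarity(q.unsqueeze(0), ents, dim=-1)         # [N]
    order = torch.argsort(scores, descending=True)
    return [candidate_objects[int(i)] for i in order]
```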
![9_image_0.png](9_image_0.png)
To choose the best model for our experiments
(Table 3, Figure 3), we train the recent knowledge graph embedding (KGE) models - CaRe, GRU-ConvE and SimKGC - on the English Open KB in mOKB6. We report performance in Table 4 using the three metrics: hits at rank 1 (H@1), hits at 10
(H@10), and mean reciprocal rank (MRR). We find that SimKGC with BERT encoder outperforms the other two models.
| Model          | H@1  | H@10 | MRR  |
|----------------|------|------|------|
| CaRe           | 6.6  | 11.3 | 8.3  |
| GRU-ConvE      | 12.4 | 27.8 | 17.8 |
| SimKGC (BERT)  | 16.1 | 40.0 | 24.3 |
| SimKGC (mBERT) | 14.8 | 38.7 | 22.8 |
| SimKGC (XLM-R) | 13.8 | 35.8 | 21.3 |

Table 4: Performance of the KGE models on the English Open KB in mOKB6.
Since BERT supports only English, we replace BERT in SimKGC with multilingual pretrained language models like mBERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), to extend the SimKGC model to other languages. We find in Table 4 that SimKGC with mBERT is better than with XLM-R by 2.9% H@10 and 1.5% MRR, possibly because mBERT (and mOKB6) uses Wikipedia while XLM-R uses CommonCrawl (Wenzek et al.,
2020) during pre-training. Thus, we use SimKGC
with mBERT as the underlying encoder to run our experiments for all the languages.
## D Kge Model Training Details
We use the code from official repositories of the KGE models - SimKGC (Wang et al., 2022),
GRU-ConvE (Kocijan and Lukasiewicz, 2021), and CaRe (Gupta et al., 2019) for our experiments. The models are trained using Adam optimizer (Kingma and Ba, 2015) on a single A100 (40 GB) GPU
with three different random seeds and we report the average of three evaluation runs.
We do not perform hyperparameter search trials, except for batch size, and use the default hyperparameters from the respective codes of KGE
models (see Table 5). We use early stopping to find the best model checkpoints based on HITS@1.
The dev set is different for each baseline: MONO,
TRANS, MONO+TRANS, and UNION+TRANS use individual language's dev set, whereas UNION w/o En and UNION use the English dev set. We report the performance of baseline models on the dev sets in Table 9 and Table 10.
| Hyperparameter | SimKGC | GRU-ConvE | CaRe |
|------------------|----------|-------------|--------|
| #epochs | 100 | 500 | 500 |
| #patience epochs | 10 | 10 | 10 |
| learning rate | 3e-5 | 3e-4 | 1e-3 |
| dropout | 0.1 | 0.3 | 0.5 |
| batch size | 256 | 1024 | 128 |
| additive margin | 0.02 | N/A | N/A |
Table 5: Hyperparameters of the KGE models.
We provide the number of trainable parameters of each KGE model in Table 6. Based on the batch size and model size, different experiments consume different amounts of GPU time. To train on the English Open KB (in the mOKB6 dataset), the CaRe and GRU-ConvE models took 2.5 hours and 0.5 hours, respectively, whereas SimKGC took nearly 1 hour of GPU time.
| KGE model | #trainable parameters |
|----------------|-------------------------|
| CaRe | 12,971,423 |
| GRU-ConvE | 12,085,523 |
| SimKGC (BERT) | 216,620,545 |
| SimKGC (mBERT) | 355,706,881 |
| SimKGC (XLM-R) | 1,119,780,865 |
Table 6: Number of trainable parameters in the KGE
models.
(Each cell reports H@1 / H@10 / MRR.)

|            | English | Hindi | Telugu | Spanish | Portuguese | Chinese |
|------------|---------|-------|--------|---------|------------|---------|
| English    | 68.4 / 97.1 / 78.8 | 3.4 / 17.2 / 8.3 | 1.6 / 11 / 5 | 17.8 / 50.7 / 28.6 | 17 / 44.6 / 26 | 5.4 / 22.3 / 11.1 |
| Hindi      | 19 / 42.2 / 26.7 | 80.6 / 99.5 / 88.3 | 2.4 / 12.5 / 5.9 | 12.3 / 36 / 19.9 | 12.3 / 33.9 / 19.7 | 5.3 / 21.9 / 10.8 |
| Telugu     | 19.5 / 42.2 / 27.2 | 4.3 / 18.7 / 9.4 | 74.4 / 99.5 / 84.2 | 10.9 / 35.4 / 18.9 | 10.7 / 34 / 18.5 | 4.7 / 21.4 / 10.1 |
| Spanish    | 27.9 / 60.4 / 38.8 | 4.1 / 17.8 / 8.9 | 1.8 / 10.7 / 5.1 | 84 / 100 / 90.3 | 37.6 / 74 / 50.1 | 6.5 / 24.9 / 12.8 |
| Portuguese | 27.8 / 58.7 / 38.2 | 4.4 / 18.2 / 9.3 | 1.7 / 10.5 / 5.1 | 41.5 / 78.5 / 53.6 | 84.2 / 99.9 / 90.8 | 6.6 / 26 / 13.2 |
| Chinese    | 22.1 / 48.4 / 30.6 | 3.5 / 18.5 / 8.8 | 1.8 / 12.2 / 5.4 | 14.8 / 42.8 / 24.2 | 15.7 / 41.6 / 24.1 | 81.6 / 99.8 / 89.2 |
## E Contextual Triples
Open IE triples are of various kinds and not all of them can be used for Open KBC task. Various filtering steps are used to remove some of these in data curation (Section 3). We define *contextual* triples as another kind of noisy triples, which are specific to, and are not interpretable out of, the context of text from which they are extracted.
(Max Born; continued; *scientific work*)
(Robb Gravett; won; *the championship*)
(George Herbert Walker Bush; was; *out of touch*)
(Christianity; is; *dominant*)
Table 8: Examples of contextual triples.
From the first two triples in Table 8, it is unclear which scientific work *Max Born* continued, or which championship *Robb Gravett* has won. The last two triples are too specific to the context and contain no factual information.
(Each cell reports H@1 / H@10 / MRR.)

| Model        | English (En) | Hindi (Hi) | Telugu (Te) | Spanish (Es) | Portuguese (Pt) | Chinese (Zh) |
|--------------|--------------|------------|-------------|--------------|-----------------|--------------|
| MONO         | 16.2 / 38.7 / 23.9 | 18.2 / 39.4 / 25.9 | 8.5 / 20 / 12.5 | 17.3 / 36.6 / 23.7 | 17.6 / 39.6 / 25.3 | 10.8 / 31.9 / 17.8 |
| TRANS        | - | 8.1 / 23.7 / 13.5 | 3.3 / 15.4 / 7.5 | 12.9 / 33.6 / 20.3 | 12.6 / 37.2 / 20.6 | 5 / 20.8 / 10.3 |
| MONO+TRANS   | - | 20.8 / 43.2 / 28.6 | 7.8 / 24.8 / 13.4 | 20.2 / 46 / 28.8 | 21 / 45.9 / 29.2 | 10.6 / 30.1 / 16.7 |
| UNION        | 19.9 / 39.6 / 26.4 | 14.5 / 38.2 / 22.4 | 5.9 / 20 / 10.6 | 19.8 / 43.2 / 27.9 | 19.7 / 43.8 / 28 | 11.2 / 33 / 18.8 |
| UNION w/o En | 5.8 / 19.5 / 10.6 | 15.4 / 39.3 / 23.3 | 6.3 / 20.5 / 11.1 | 19.4 / 41.6 / 26.4 | 16.9 / 42.9 / 25.9 | 11.3 / 33 / 18.4 |
| UNION+TRANS  | - | 20.8 / 44.9 / 28.8 | 7.3 / 27.1 / 14 | 21.4 / 45.3 / 29.6 | 19.4 / 49.1 / 29.1 | 6.9 / 31 / 15.1 |
| Model          | H@1  | H@10 | MRR  |
|----------------|------|------|------|
| CaRe           | 7.1  | 11.1 | 8.5  |
| GRU-ConvE      | 16.8 | 31.5 | 22.1 |
| SimKGC (BERT)  | 20.3 | 40.1 | 27.1 |
| SimKGC (mBERT) | 16.2 | 38.7 | 23.9 |
| SimKGC (XLM-R) | 17   | 36.6 | 23.2 |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✗ A2. Did you discuss any potential risks of your work?
There are no potential risks of our work to our knowledge.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4
✓ B1. Did you cite the creators of artifacts you used?
3,4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Abstract

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix D
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A, D
## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
4

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
rabin-etal-2023-covering | Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment | https://aclanthology.org/2023.acl-short.20 | Human communication often involves information gaps between the interlocutors. For example, in an educational dialogue a student often provides an answer that is incomplete, and there is a gap between this answer and the perfect one expected by the teacher. Successful dialogue then hinges on the teacher asking about this gap in an effective manner, thus creating a rich and interactive educational experience. We focus on the problem of generating such gap-focused questions (GFQs) automatically. We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these. Finally, we provide an evaluation by human annotators of our generated questions compared against human generated ones, demonstrating competitive performance. | # Covering Uncommon Ground: Gap-Focused Question Generation For Answer Assessment
Roni Rabin1 Alexandre Djerbetian1 Roee Engelberg1,2 Lidan Hackmon1 Gal Elidan1,3 Reut Tsarfaty1,4 Amir Globerson1,5 1 Google Research 2 Computer Science Dept., Technion 3 Statistics Dept., Hebrew University of Jerusalem 4 Computer Science Dept., Bar-Ilan University 5 Blavatnik School of Computer Science, Tel Aviv University
{ronir, adjerbetian, roee, lidanh, elidan, reutt, amirg}@google.com
## Abstract
Human communication often involves information gaps between the interlocutors. For example, in an educational dialogue, a student often provides an answer that is incomplete, and there is a gap between this answer and the perfect one expected by the teacher. Successful dialogue then hinges on the teacher asking about this gap in an effective manner, thus creating a rich and interactive educational experience. We focus on the problem of generating such gap-focused questions (GFQs) automatically. We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these. Finally, we provide an evaluation by human annotators of our generated questions compared against human generated ones, demonstrating competitive performance.
## 1 Introduction
Natural language dialogues are often driven by information gaps. Formally, these are gaps between the epistemic states of the interlocutors. Namely, one knows something that the other does not, and the conversation revolves around reducing this gap.
An important example is the education setting where teachers ask students questions, and receive answers that may be incomplete. With the expectation of what a *complete* answer should contain, the teacher then engages in a gap-focused dialogue to help the student to arrive at a complete answer.
There are multiple other application settings of information gaps, including support-line bots, longform Q&A, and automated fact checking.
The core challenge in this setting is how to generate effective questions about the information gap.
In terms of formal semantics and pragmatics, this gap can be viewed as the complementary of the common-ground (Stalnaker, 2002) held by the interlocutors. Somewhat surprisingly, despite much work on dialogue learning (Ni et al., 2022; Zhang et al., 2020) and question generation (Michael et al.,
2018; Pyatkin et al., 2020, 2021; Ko et al., 2020),
little attention has been given to generating questions that focus on such information gaps.
The formal traditional approach to representing the dialogic information gap is via the set of propositions that are known to one side but not the other
(Stalnaker, 2002). However, this set can be quite large, and it is also unclear how to turn these propositions into dialogue utterances. We propose an arguably more natural representation: a generated set of natural language questions whose answers represent the information that the teacher needs to ask about to reduce the gap. We call these *gap-focused questions* (GFQs). A key advantage of this representation is that the generated questions can be used directly in the teacher-student dialogue.
Given a complete teacher answer and a partial student answer, there are many questions that could be asked, but some are more natural than others.
For example, consider the complete answer "A man is wearing a blue hat and a red shirt and is playing a guitar", and a student response *"There is a* man playing the guitar". Two candidate questions could be *"What color hat is the man wearing?"* and *"What is the man wearing?"*. The second question is arguably more natural as it does not reveal information that is not in the teacher-student common ground, namely that a hat is being worn.
The above demonstrates some of the complexity of generating effective GFQs, and the need to rely on certain discourse desiderata. In this work we define the GFQ challenge, a novel question generation task, and we detail the desired properties of the generated questions. Subsequently, we provide a model for GFQ generation that aims to satisfy these desiderata, and demonstrate its competitiveness via a task of generating questions to fill the gap between premises and hypotheses in a standard natural language inference (NLI) setup.
![1_image_0.png](1_image_0.png)

In designing desired properties for GFQs, we take inspiration from theories of collaborative communication, and in particular Grice's maxims (Grice, 1975). For example, the *maxim of quantity* states that speakers are economic and do not communicate what is already known. Thus, the teacher should not ask about what is already in the common ground with the student. In the above example, this means not asking *"What is the man playing?"*. We describe additional desiderata in §3.
To tackle the GFQ challenge, we show how general-purpose NLP models (question generation, question answering, and constituency parsing) can be used to generate GFQs that satisfy the discourse desiderata. See Figure 1 for an outline of the process. To assess our model, we consider pairs of texts that contain information gaps, and evaluate our ability to capture these gaps using GFQs. Such texts are readily available in NLI datasets that contain pairs of a premise and an entailed hypothesis with less information. We consider the SNLI
dataset (Bowman et al., 2015), and use human annotators to evaluate the merit of our approach relative to GFQs generated by humans.
Our contribution is three-fold. First, we propose the novel setup of gap-focused questions, a key element of a student-teacher discourse as well as other settings such as automated fact checking. Second, we identify desiderata inspired by conversational maxims, and provide a model for generating questions that satisfy them. Third, we demonstrate the merit of our model on an NLI dataset.
## 2 Related Work
Natural dialogue is a key goal of modern NLP and, despite substantial progress, there is still a considerable difference between humans and models.
In this work we focus on dialogues where the bot
(teacher) knows more than the user (student), and the goal is to gradually decrease this knowledge gap via gap-focused follow-up questions.
Several works have focused on the problem of follow-up question generation in dialogues. However, to the best of our knowledge, none of these focus on information gaps as we do. Ko et al. (2020)
introduce the problem of inquisitive question generation, where the goal is to generate questions about facts that are not in the text. This is not done in reference to a complete text, and is thus principally different from our goal. In fact, in our settings, an inquisitive question would typically be a bad GFQ, since it refers to information that is outside the knowledge of both teacher and student.
Prior works considered a related task referred to as answer-agnostic question generation (Scialom et al., 2019), but with a focus on factual questions, whereas the inquisitive setting is broader.
Another class of follow-up questions are clarification ones (Rao and Daumé III, 2018), which can also be viewed as a special case of inquisitive questions. Finally, there are works on follow-up questions guided by rules as in the SHARC dataset (Saeidi et al., 2018).
Our GFQ setting is also related to the challenge of explainable NLI (Kalouli et al., 2020), namely the task of explaining why a certain sentence entails another. The GFQ output can be viewed as a novel explanation mechanism of why the student text is entailed by the source text, as it explicitly refers to the gap between these texts.
Our work is inspired by novel uses of question generation models, particularly in the context of evaluating model consistency (Honovich et al.,
2021). In these, question generation is used to find
"LLM hallucinations" where the generated text is not grounded in a given reference text. Our task can be viewed as the inverse of the knowledge grounding task, and our particular focus is on the questions generated rather than just pointing to information gaps. An additional line of work in this vein is QA-based semantics, where text semantics are represented via a set of questions rather than a formal graph (e.g., see Michael et al., 2018).
## 3 Criteria For Gap-Focused Questions
Given a complete source text TC and a student text TS, our goal is to construct a model that takes TS and TC as input and produces a set of one or more questions Q that ask about the information gap between TC and TS. If one takes the term
"information gap" literally, there are many such possible questions (e.g., which word appears in TC
but not in TS). In a natural language setting we are obviously interested in questions that are *natural*,
that is, would likely be asked by a human who knows TC and has heard the student description TS. When defining the desiderata for the generated questions, we consider what knowledge is held by the teacher and the student and what information is inside and outside their common ground (see Figure 2). We next identify desired properties for the generated questions, followed by a description of our model for generating gap-focused questions that satisfy these desiderata.
The following desired properties of an effective GFQ are loosely based on collaborative communication concepts (Grice, 1975):
- **P1: Answerability:** Only ask questions that can be answered based on the complete text TC
(areas A ∪ B in Figure 2). This follows from Grice's *maxim of relevance*; speakers say things that are pertinent to the discussion.
- **P2: Answers should not be in the common ground:** If the student has already demonstrated knowing a fact in TS, there is no reason to ask about it again. Namely, in Figure 2, we don't want to ask about information in B. This pertains to Grice's *maxim of quantity*; speakers are economic, they do not utter information beyond the bare minimum that is necessary to ask the question, and they will refrain from repeating already-known information.
- **P3: Questions should only use information known to the user:** The question itself should rely only on information in TS and not in TC. For example, if TC is "A woman is wearing a blue hat" and TS is *"A woman is wearing something"*, it is preferable not to ask "What color is the hat?" as it refers to information that did not appear in TS (i.e., that the woman is wearing a hat). This is loosely related to the Grice maxim of manner, where one tries to be clear, brief, and orderly. If we were to ask questions using information unknown to the user (in area A in Figure 2), we may introduce unnecessary details and obscurity into the discussion.1

1 Note that in some cases this may only be partially possible and a "hint" must be provided in order to be able to phrase a grammatically correct and semantically sensible question.

![2_image_0.png](2_image_0.png)
## 4 The GFQs Generation Approach
We next describe our modeling approach for the GFQ generation problem, with the goal of capturing the properties described above. Before describing our GFQs generation approach, we briefly outline the NLP components we rely on in the question generation process:
A question generation model G that, given an input text T and a span X ⊂ T, generates questions about T whose answer is X.
A question answering model A, that takes as input a text T and a question Q about the text, and returns the answer or an indication that the question is unanswerable from the text.
A constituency parser P, that takes a text X,
breaks it down into sub-phrases (constituents), and returns a parse tree.
Additional details about these components can be found in appendix C.
We are now ready to describe our approach for generating GFQs. The model generates an ordered set of possible follow-up questions QG via the following steps, which roughly correspond to the desired criteria described in §3:
Step 1: Generate answerable questions (P1).
Using the constituency parser P, we extract the
spans of all the constituents in the source text TC,
except for those spanning the entire sentence, and single word spans containing functional elements
(e.g., prepositions). For each span X ⊂ TC, we use the question generation model G to generate a set of questions whose answer should be X, thus creating a set of questions that satisfy the answerability property. We denote this set QT and assign QG = QT.
Step 2: Filter questions whose answers are in the common ground (P2). We next wish to remove questions that are answerable by the student text TS. To that end, we use the question answering model A, and for each q ∈ QG, if A(TS, q) ≠ "UNANSWERABLE", we set QG = QG \ {q}.2

2 Note that Step 2 will also filter out questions that the student answered incorrectly. This would be an area for improvement in future models.

Step 3: Prefer questions which only use information known to the user (P3). We prefer questions that do not reveal information beyond what is known to the user. This is not always strictly possible and thus, instead of filtering, we rank questions according to the (possibly zero) amount of additional information they reveal. To do so, let R
be all the answers to the questions in QG. By construction, R contains spans from TC that the student did not mention, i.e., spans that we would prefer not to appear in the generated questions. For each q ∈ QG, we count the number of items in R
included in q. We sort QG in ascending order by this number and return the first element. We thus return a question that uses the least number of facts unknown to the student.
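To summarize the procedure, the sketch below strings the three steps together. The callables `G_generate`, `A_answer`, and `P_spans` stand in for the question generation model G, the QA model A, and the constituency parser P; their interfaces, as well as the substring check in Step 3, are simplifying assumptions rather than the exact implementation.

```python
# Schematic sketch of the three-step GFQ pipeline described above.
UNANSWERABLE = "UNANSWERABLE"

def generate_gfq(t_complete, t_student, G_generate, A_answer, P_spans):
    # Step 1: generate answerable questions, remembering the span each asks about.
    candidates = []  # (question, target span from T_C)
    for span in P_spans(t_complete):
        for q in G_generate(text=t_complete, answer_span=span):
            candidates.append((q, span))

    # Step 2: drop questions the student text can already answer (common ground).
    candidates = [(q, s) for (q, s) in candidates
                  if A_answer(t_student, q) == UNANSWERABLE]
    if not candidates:
        return None

    # Step 3: R = spans from T_C the student did not mention; prefer the question
    # whose wording reveals the fewest of them.
    R = {s for (_, s) in candidates}
    def revealed(question):
        return sum(1 for s in R if s.lower() in question.lower())
    best_question, _ = min(candidates, key=lambda qs: revealed(qs[0]))
    return best_question
```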
## 5 Experiments
We next describe an evaluation of our GFQ model.
Data: We use the *SNLI Dataset* (Bowman et al.,
2015) where a Natural language inference (NLI)
pair contains two sentences denoting a premise and a hypothesis, and the relation between them can be entailment, *contradiction* and *neutral*. We focus on pairs labeled as entailment, and filter out those with bi-directional entailment, so that there is a gap between hypothesis and premise. We do not use any data for training, and apply our model to the test partition of the SNLI dataset.
Evaluation Benchmark: In order to compare the quality of our automatically generated questions to manually generated ones, we asked human annotators to generate questions for 200 instances of the SNLI test set (see Appendix A for the annotator instructions). We emphasize that these questions were only used for evaluation, as explained below, and not for training the model. They were collected after model design was completed. We release this evaluation dataset to the public; it is available here. See additional details about this dataset in Appendix E.

| Model  | Average score |
|--------|---------------|
| Step 1 | 3.72          |
| Step 2 | 3.86          |
| Step 3 | 3.94          |
| Human  | 4.06          |

Table 1: Average annotator scores for each of the compared models and the human generated questions.

Annotator Evaluation of Generated Questions:
As with other generative settings, offline evaluation is challenging. In fact, even if we had human generated questions for all SNLI, using those for evaluation would need to assume that they are exhaustive (otherwise the model can generate a good question but be penalized because it is not in the set generated by humans). Instead, as is commonly done (Ko et al., 2020), we rely on human evaluation. We present annotators with TC, TS and a candidate GFQ q and ask them to provide a 1 − 5 score of how well q functions as a follow-up question (see Appendix A for annotators instructions).
We use 3 annotators per question.
Compared Models: We compare four generation approaches: **Human**: Questions generated by human annotators; **Step 1**: This model selects a random question out of those generated by the question generation model (i.e., Step 1 in §4). We note that this is already a strong baseline because its questions are based on the source text. **Step 2**:
The outcome of Step 2 in §4 where only questions not answerable by the student text are kept. **Step 3**:
The outcome of Step 3, where we additionally aim for questions which use information known to the user.
Results: Table 1 provides the average scores for each of the considered models and the human generated questions. It can be seen that each step contributes to the score, and human generated questions are somewhat better than our final model (**Step 3**). Using the Wilcoxon signed-rank test for paired differences, we found that all differences were significant at p-value ≤ 0.05.

| Source text | Student description | Generated question (Step 3) |
|-------------|---------------------|-----------------------------|
| A man stands by two face structures on Easter Island. | A man on Easter Island. | Two faces are what on Easter Island? |
| Two young children, one wearing a red striped shirt, are looking in through the window while an adult in a pink shirt watches from behind. | A person in a shirt. | What is one child wearing? |
| A man in a purple jersey is falling down while chasing a player in a green jersey playing soccer | The two soccer players run around chasing each other | What is the man in the cartoon wearing? |

Table 2: Examples of the loss patterns found in the analysis of low scoring questions. See details in the Error Analysis paragraph in Section 5.

![4_image_0.png](4_image_0.png)

![4_image_1.png](4_image_1.png)

Human: What sport does the referee work in?

Figure 3: An example of the steps of our Gap-Focused Questions model, and a human-generated question.
Examples: Figure 3 shows an example of the three stages, and a human generated question. Appendix F provides more examples.
Error Analysis: We analyze cases where our final model (Step 3) received low scores from the annotators (an average score of 3 and lower). In our analysis we have observed three main loss patterns
(sometimes appearing together): (1) Poor question phrasing - these are questions whose structure or choice of words is less natural than if a person were to ask the same question. See example in the first row in Table 2. (2) Questions which include information outside of the teacher-student common ground. These are cases where the minimum criterion defined in Step 3 still results in a question with some information unknown to the user. See examples in the first 2 rows in Table 2. (3) Questions including information outside the complete source text. In rare cases we have found that the question generation model generates questions that include
"hallucinations" or point to issues in the semantic understanding of the complete source text. See the third example in Table 2.
## 6 Conclusion
We consider the task of question generation in a novel setting where there is an information gap between speakers, and the gap-focused questions
(GFQs) aim to reduce this gap. Building on advances in question generation and question answering, we show how to generate useful GFQs that meet several natural criteria inspired by theories of cooperative conversation. It is natural to ask whether one can employ a fully generative approach for GFQs using LLMs. This is a natural direction for future study, and we believe that the criteria and design choices we studied here will be significant in defining and evaluating such future work.
## Limitations
We present the first study of generating questions for filling in information gaps. Our method is limited in several ways. First, it focuses on information that is explicitly missing, and does not discuss information that is inaccurate or incomplete in other ways. Second, it only asks one follow-up question and does not address multi-turn dialogue about a student answer, or multiple student answers. Finally, our approach makes somewhat restricted use of the student answer, and it will be better to generate questions that directly uptake information from the student text (Demszky et al., 2021). We leave the deep investigation of these for future work.
## Acknowledgments
We thank Avi Caciularu for constructive feedback on this work.
## Ethics And Impact
Regarding risks, as with any NLP model, care must be taken in application, so that it generates truthful information, and does not introduce biases. However, we think this is not a major concern in our case as our modeling will generate text directly related to the source and student texts. In terms of impact, our approach can be used to improve a wide array of applications, including educational dialogue (e.g., reading comprehension), supportline bots, and automated fact checking.
## References
Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642.
Dorottya Demszky, Jing Liu, Zid Mancenido, Julie Cohen, Heather Hill, Dan Jurafsky, and Tatsunori B
Hashimoto. 2021. Measuring conversational uptake:
A case study on student-teacher interactions. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language* Processing (Volume 1: Long Papers), pages 1638–
1653.
H. P. Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, *Syntax and* Semantics: Vol. 3: Speech Acts, pages 41–58. Academic Press, New York.
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. True: Re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920.
Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021.
Q2: Evaluating factual consistency in knowledgegrounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856–7870.
Aikaterini-Lida Kalouli, Rita Sevastjanova, Valeria de Paiva, Richard Crouch, and Mennatallah ElAssady. 2020. XplaiNLI: Explainable natural language inference through visual analytics. In Proceedings of the 28th International Conference on Computational Linguistics: System Demonstrations,
pages 48–52, Barcelona, Spain (Online). International Committee on Computational Linguistics
(ICCL).
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics.
Wei-Jen Ko, Te-yuan Chen, Yiyan Huang, Greg Durrett, and Junyi Jessy Li. 2020. Inquisitive question generation for high level text comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke S. Zettlemoyer. 2018. Crowdsourcing question-answer meaning representations. In NAACL-HLT.
Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, and Erik Cambria. 2022. Recent advances in deep learning based dialogue systems: A systematic survey.
Artificial intelligence review, pages 1–101.
Valentina Pyatkin, Ayal Klein, Reut Tsarfaty, and Ido Dagan. 2020. QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2804–2819, Online. Association for Computational Linguistics.
Valentina Pyatkin, Paul Roit, Julian Michael, Yoav Goldberg, Reut Tsarfaty, and Ido Dagan. 2021. Asking it all: Generating contextualized questions for any semantic role. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 1429–1441, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–
789, Melbourne, Australia. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Sudha Rao and Hal Daumé III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 2737–2746.
Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In *EMNLP*.
Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019. Self-attention architectures for answer-agnostic neural question generation. In *Proceedings of the 57th annual meeting of the Association for Computational Linguistics*, pages 6027–
6032.
Robert Stalnaker. 2002. Common ground. *Linguistics* and Philosophy, 25(5/6):701–721.
Zheng Zhang, Ryuichi Takanobu, Qi Zhu, MinLie Huang, and XiaoYan Zhu. 2020. Recent advances and challenges in task-oriented dialog systems. *Science China Technological Sciences*, 63(10):2011–
2027.
## A Annotating Guidelines
Here we provide all the guidelines to annotators, for both human question generation and human rating of questions generated by the model. Guidelines for the human annotator task of writing follow-up questions: We depict the guidelines and the examples for the writing followup questions task in Figure 4, and the task design in Figure 5.
Guidelines for the human annotator task of rating follow-up questions: We depict the guidelines of the task of rating the follow-up questions in Figure 6, the examples in Figure 7, and the task design in Figure 8.
## B Annotator Related Information
Annotators were paid by the hour, and recruited as contractors for a variety of annotating projects by our team and related teams. The annotators are all native English speakers (from Canada and the US). They are also aware of the way in which the information will be used. There are no special ethical sensitivities in the collection process and thus it was exempt from an ethics review board.
## C Implementation Details
Question Generation Model: As our question generation model G, we use the T5-xxl model
(Raffel et al., 2020) fine-tuned on SQuAD1.1 (Rajpurkar et al., 2016). We also use beam search and question filtering, similarly to Honovich et al.
(2021, Section 2), see this work for further details.
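As a rough illustration of how such a T5-based question generation model could be invoked with Hugging Face transformers, consider the sketch below; the checkpoint name and the "answer: ... context: ..." input format are placeholders for this illustration, not the fine-tuned T5-xxl models actually used in the paper.

```python
# Hedged sketch of calling a T5-style question generation model with beam search.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")                  # placeholder checkpoint
qg_model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate_questions(text, answer_span, num_beams=4):
    prompt = f"answer: {answer_span} context: {text}"           # illustrative input format
    input_ids = tok(prompt, return_tensors="pt").input_ids
    outputs = qg_model.generate(input_ids, num_beams=num_beams,
                                num_return_sequences=num_beams, max_new_tokens=64)
    return [tok.decode(o, skip_special_tokens=True) for o in outputs]
```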
Question Answering Model: For our question answering model A, we use the T5-xxl model
(Raffel et al., 2020) fine-tuned on SQuAD2.0 (Rajpurkar et al., 2018).
Constituency Parser: We use the Berkeley Neural Parser (Kitaev and Klein, 2018), implemented in the spaCy package.3

3 We used spaCy 3.0 - https://spacy.io/.

SNLI Filtering: We consider the subset of SNLI with an "entailed" label. Since we are not interested in the case of equivalent hypothesis and premise, we filter out bi-directional entailments using an NLI model (similar to Honovich et al., 2022). In the resulting set of one-directional entailments, the information in the premise (TC) is *strictly* greater than the information in the hypothesis (TS), which is our case of interest.
## D Computational Resources Details
In terms of computational resources, the project is lightweight, as it required no training at all, and just running inference steps of pre-trained models (question answering, question generation and parsing), all of which run in several minutes on standard GPUs.
## E GFQ Test Released Dataset
We release a benchmarking dataset of 200 examples from the SNLI test set, each with a human generated gap-focused question. The data is available here.
Details about the dataset: We asked 3 annotators to write questions for each SNLI pair (see guidelines in Appendix A) and used a heuristic to select a single GFQ. When selecting this single question, our goal is to prefer GFQs where multiple annotators chose to write a question about the same topic. We therefore apply the following heuristic: for each human written question q, we use our question answering model A and define a as the answer to this question given Tc: a = A(Tc, q). We then count n, the number of annotators who produced questions leading to the same answer a; we look at the questions for which n is maximal and choose a random question from there.
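The heuristic above could be sketched as follows; `A_answer` stands in for the QA model A and `annotator_of` maps each question to the annotator who wrote it, both of which are interface assumptions made for this illustration.

```python
# Illustrative sketch of the single-GFQ selection heuristic described above.
import random
from collections import defaultdict

def select_single_gfq(t_complete, questions, annotator_of, A_answer):
    by_answer = defaultdict(list)
    for q in questions:
        a = A_answer(t_complete, q)          # a = A(T_C, q)
        by_answer[a].append(q)

    def n_annotators(qs):
        # n = number of distinct annotators whose questions lead to this answer.
        return len({annotator_of(q) for q in qs})

    best_group = max(by_answer.values(), key=n_annotators)
    return random.choice(best_group)         # random question from a maximal group
```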
License: This data, as well as the underlying SNLI data, is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.4
## F Examples Of Generated Questions
Here we provide examples of questions generated by humans and by the different models we consider.
Table 3 reports questions generated by Step 1, Step 2, Step 3 and Human.
## G Data Related Information
The data collected from annotators contains the manually generated questions and the scoring of generated questions. There are no issues of offensive content or privacy in this data, as it based closely on the SNLI dataset.
## Instructions
In this task, you will be given a certain reference text and a user text which is a partial description of the full content of the reference text.
Your job is to write guiding questions that you would ask the user in order to get the missing pieces of information.
Additional notes to keep in mind:
- Please provide the answer to the questions you write. Note that these answers should be found in the reference text.
- You can decide how many questions to ask in order t questions should be around 2 - 5).
Ideal questions will refer to the user text (e.g., quote parts of it) and should naturally extend it.
See examples in the table below.
| Reference text | User text | Guiding questions |
|-------------------------------------------------------------|-------------------------------------------------------------|----------------------------------------------------------|
| At a street festival, a boy and a man cook | 1. Where are the boy and man cooking? Answer: at a street | |
| A boy and man are | | |
| some sort of | cooking meat. | festival. |
| 2. What kind of meat are they cooking? Answer: "Texas | | |
| "Texas Smoked" meat while pedestrians | | |
| pass by. | Smoked" meat. | |
| 3. What is happening around them while they cook? Answer: | | |
| pedestrians pass by. | | |
| Two men in blue soccer uniforms look like | Soccer players resting. | 1. How many soccer players are there? Answer: two |
| 2. What are the soccer players wearing? Answer: blue soccer | | |
| they are at rest. | uniforms. | |
| A group of men in reflective gear are holding | There are multiple people | 1. What is the gender of the people present? Answer: men |
| light sticks | 3. What are the people wearing? Answer: reflective gear
3. What are the people doing? Answer: holding light sticks | |
| present. | | |
| while standing on a wooden floor that has | | |
| outdoor lighting. | 4. Where are the people? Answer: on a wooden floor that has | |
| outdoor lighting | | |
Figure 4: Human annotator guidelines and examples for the task of writing follow-up questions.
## Task
Please provide the guiding questions in the table below.
Note: You do not need to fill the entire table, in many cases fewer than 6 questions will be enough.
If you feel like more than 6 questions are needed, please provide the extra questions in the free text box below the table.
Reference text: Two men climbing on a wooden scaffold.
![8_image_0.png](8_image_0.png)
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
| Followup question | Rating | Explanation |
|-------------------|--------|-------------|
| What is the woman's name? | Is this a good followup question for the teacher to ask? Very good | Very good question. The student did not mention the woman's name. |
| In addition to singing, what was the woman doing? | Is this a good followup question for the teacher to ask? Very good | Very good question. It could help the student provide the missing part about "playing the guitar". |
| What was the woman singing? | Is this a good followup question for the teacher to ask? Very good | Very good question. It could help the student provide the missing part about "her favorite song". |
| Singing and what else was the woman doing? | Is this a good followup question for the teacher to ask? Ok | Ok question. The phrasing isn't natural but it could help the student provide the missing part about "playing the guitar". |
| The woman was singing her favorite what? | Is this a good followup question for the teacher to ask? Very bad | Very bad question. The answer to the question is obvious even without knowing the complete text. |
| What was the woman doing? | Is this a good followup question for the teacher to ask? Very bad | Very bad question. This question is too general and wouldn't help the student provide more information. |
| What was the woman wearing? | Is this a good followup question for the teacher to ask? Very bad | Very bad question. This question asks about information which isn't present in the complete text. |
Figure 8: The user interface of the human annotator task of rating follow-up question.
![10_image_0.png](10_image_0.png)
| Source text | A child plays with her father's boots. |
|------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
| Student description | A child is playing. |
| Step 1 | What does she do with them? |
| Step 2 | What does the child do with her father's boots? |
| Step 3 | What does the child play with? |
| Human | What is the child playing with? |
| Source text | Two men work outside polishing shoes. |
| Student description | Some men are polishing shoes. |
| Step 1 | What are the two men doing to the shoes? |
| Step 2 | Who works outside to polish shoes? |
| Step 3 | Where do the men work? |
| Human | How many men are there? |
| Source text | A boy dressed in a plaid kilt with a brown hat wields a long pole. |
| Student description | A boy has and object in his hands. |
| Step 1 | Aside from the kilt, what brown item does the boy wearing it wear? |
| Step 2 | What color is the hat the boy is wearing? |
| Step 3 | What type of garment is the boy wearing? |
| Human | What does the boy wear on his body? |
| Source text | A man in a white shirt and baseball hat is pushing a cart carrying several bags on a street. |
| Student description | A man is walking outside. |
| Step 1 | What is the man pushing a cart wearing? |
| Step 2 | Where is the man pushing a cart with bags? |
| Step 3 | What is the man in the picture wearing? |
| Human | What is the man wearing? |
Table 3: Example GFQs from our different models: Step 1, Step 2, Step 3 and Human.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section (after Section 6: Conclusions)
✓ A2. Did you discuss any potential risks of your work?
Ethics and Impact section (after Section 6: Conclusions)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.
✓ B1. Did you cite the creators of artifacts you used?
Provided a citation to the SNLI dataset and SQUAD.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix E.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Appendix E.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix G.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Appendix E.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5.
## C ✓ **Did You Run Computational Experiments?** Section 5.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C & D
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix C
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 5.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix A.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix B.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix B.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Appendix B.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Appendix B. |
hallinan-etal-2023-detoxifying | Detoxifying Text with {M}a{RC}o: Controllable Revision with Experts and Anti-Experts | https://aclanthology.org/2023.acl-short.21 | Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MaRCo, a detoxification algorithm that combines controllable generation and text rewriting methods using a Product of Experts with autoencoder language models (LMs). MaRCo uses likelihoods under a non-toxic LM (expert) and a toxic LM (anti-expert) to find candidate words to mask and potentially replace. We evaluate our method on several subtle toxicity and microaggressions datasets, and show that it not only outperforms baselines on automatic metrics, but MaRCo{'}s rewrites are preferred 2.1 times more in human evaluation. Its applicability to instances of subtle toxicity is especially promising, demonstrating a path forward for addressing increasingly elusive online hate. |
## Detoxifying Text With Marco: Controllable Revision With Experts And Anti-Experts
Skyler Hallinan♡ Alisa Liu♡ Yejin Choi♡♣ **Maarten Sap**♢♣
♡Paul G. Allen School of Computer Science & Engineering, University of Washington
♣Allen Institute for AI ♢Language Technologies Institute, Carnegie Mellon University [email protected], [email protected]
## Abstract
Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MARCO, a detoxification algorithm that combines controllable generation and text rewriting methods using a Product of Experts with autoencoder language models (LMs). MARCO
uses likelihoods under a non-toxic LM (expert)
and a toxic LM (anti-expert) to find candidate words to mask and replace. We evaluate our method on several subtle toxicity and microaggressions datasets, and show that it not only outperforms baselines on automatic metrics, but MARCO's rewrites are preferred 2.1× more in human evaluation. Its applicability to instances of subtle toxicity is especially promising, demonstrating a path forward for addressing increasingly elusive online hate.
## 1 Introduction
Toxic, offensive, hateful, or biased language is increasingly prevalent and can cause online and offline harms, especially to minority groups (Thomas et al., 2021; OHCHR, 2021). This is challenging for NLP systems to detect and account for when biases are subtle or without explicit toxic keywords
(Hartvigsen et al., 2022; Han and Tsvetkov, 2020; Vidgen et al., 2021). For example, the statement
"*You'll be fine! Just talk like a white person*" conveys the biased implication that non-white dialects are not conducive to success (Figure 1), which is a harmful racial stereotype (Nadal et al., 2014).
Text detoxification, i.e., rewriting text to be less toxic while preserving non-toxic meaning, provides a promising solution by suggesting alternative ways of expressing similar ideas with less biased implications (Nogueira dos Santos et al., 2018). For example, the rewrite "You'll be fine! Just talk like a *good* person" eliminates the racial bias from the original statement while preserving the *non-toxic* meaning. Such methods have the potential to improve
![0_image_0.png](0_image_0.png)
the quality of online conversations (e.g., through machine-in-the-loop interfaces; Hohenstein et al.,
2021; Clark et al., 2018).
We present MARCO, Mask and Replace with Context: a new, unsupervised algorithm for text detoxification that combines mask-and-replace text denoising with controllable text generation using a Product of Experts (PoE) (PoE, DEXPERTS; Hinton, 2002; Liu et al., 2021).
MARCO jointly uses an expert and an antiexpert, a pair of language models (LM) fine-tuned on a **non-toxic** and **toxic** corpus respectively, to identify which tokens *most likely* contribute to the overall toxicity, and then suggest replacements that lower toxicity. Using LMs to capture toxicity allows MARCO to rewrite much subtler toxic text compared to previous work that uses toxicity classifiers or toxic word lists (Dale et al., 2021).
We apply MARCO to three datasets focused on subtly toxic statements, such as microaggressions.
Our method outperforms state-of-the-art detoxification baselines from Dale et al. (2021) across all three datasets, as measured through both automatic and human evaluation. Our work shows the effectiveness of combining controllable generation with text rewriting methods for text detoxification.1
## 2 Background: Text Detoxification
Text detoxification is a form of stylistic rewriting (Hu et al., 2017; Shen et al., 2017; Jhamtani et al., 2017) with the goal of producing a non-toxic rewrite given a toxic input sentence. This task is challenging, as it requires both detoxification and preservation of non-toxic meaning, in contrast to controllable text generation, which aims to simply generate any non-toxic continuation for a prompt
(Prabhumoye et al., 2020; Gehman et al., 2020).
Due to a lack of supervision with parallel data, an often effective approach to stylistic rewriting relies on unsupervised masking-and-reconstructing approaches (Li et al., 2018; Wu et al., 2019; Malmi et al., 2020; Ma et al., 2020). In this paradigm, source style-specific tokens/spans in the input text are detected and masked, then filled in with tokens/spans from the target-style using a masked language model. Other work has framed detoxification as a translation or paraphrasing task, using a classifier to steer away from toxic content (Nogueira dos Santos et al., 2018; Dale et al., 2021).
## 3 Text Detoxification With Marco
MARCO is an unsupervised approach to text detoxification, consisting of two discrete steps: **masking**
and then **replacing** tokens, assisted by the *context* of the entire sequence. Though inspired by DEX-PERTS (Liu et al., 2021), our novelty is two-fold:
first, we tackle a more challenging task, unsupervised revision, instead of style-controlled generation, and second, we propose a *detect* and *rewrite* pipeline, in contrast to simple word-distribution steering during autoregressive generation.
Expert and Anti-Expert LMs Our method for unsupervised controlled revision is based on *denoising autoencoder* LMs (AE-LMs), which are trained to mask and reconstruct sequences of text. Our setup consists of a *base* pretrained AE-LM G,
an *expert* AE-LM G+ finetuned on data with desirable attributes, and an *anti-expert* AE-LM G−
finetuned on data with undesirable attributes.
We use BART-base (Lewis et al., 2020) as our base autoencoder. We finetune the expert and antiexpert using 1M non-toxic and 100K overtly toxic comments from the Jigsaw corpus (Do, 2019), as done in Liu et al. (2021) and Dale et al. (2021).
BART can infill multiple or no tokens even if only one token is masked, allowing for more flexible mask infilling. See Appendix A for training details.
## 3.1 Contextual Masking
We first identify locations that *could* convey toxic meaning; intuitively, these could be words or phrases with strongly differing likelihoods under the expert and anti-expert.
Formally, given a sequence w, for every token wi ∈ w, we temporarily mask it and generate probability distributions over the vocabulary V for that location from G+ and G−, which we denote P+ and P− respectively. Then, we compute the distance di between P+ and P− using the Jensen-Shannon divergence, a symmetric form of the Kullback–Leibler (KL) divergence:2

$$d_{i}=\frac{1}{2}\left(D_{\mathrm{KL}}(P^{+}\|P^{-})\right)+\frac{1}{2}\left(D_{\mathrm{KL}}(P^{-}\|P^{+})\right)$$

After normalizing all distances by the mean, we mask all wi whose distance di is above a threshold τ and denote the resulting sequence wm; these masked tokens are locations where toxicity may be present due to expert and anti-expert disagreement.
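A minimal sketch of this masking step; `dist_at` is an assumed helper that returns the expert's ("+") or anti-expert's ("−") distribution at a temporarily masked position, and the default τ = 1.2 follows Appendix B.2:

```python
import torch

def js_distance(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Symmetric KL as defined above: 0.5*KL(p||q) + 0.5*KL(q||p)."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    kl_pq = (p * (p / q).log()).sum(-1)
    kl_qp = (q * (q / p).log()).sum(-1)
    return 0.5 * kl_pq + 0.5 * kl_qp

def contextual_mask(tokens, dist_at, tau: float = 1.2, mask_token: str = "<mask>"):
    """Mask tokens whose expert / anti-expert disagreement exceeds tau.

    `dist_at(tokens, i, which)` is an assumed helper: it masks tokens[i],
    runs the expert ("+") or anti-expert ("-") AE-LM, and returns the
    probability distribution over the vocabulary at position i.
    """
    d = torch.stack([js_distance(dist_at(tokens, i, "+"), dist_at(tokens, i, "-"))
                     for i in range(len(tokens))])
    d = d / d.mean()                                   # normalize all distances by the mean
    return [mask_token if d[i] > tau else tok for i, tok in enumerate(tokens)]
```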
## 3.2 Contextual Replacing
After masking potentially toxic locations, MARCO
then replaces them with more benign tokens - if they are indeed toxic - to autoregressively produce a rewrite g given the original and masked sentences w and wm.

2Given probability distributions A and B, the KL divergence is defined as $$D_{\mathrm{KL}}(A\|B)=\sum_{x\in V}A(x)\log\frac{A(x)}{B(x)}$$

| Dataset | Method | Toxicity val (↓) | BERTScore val (↑) | Fluency val (↓) | Toxicity test (↓) | BERTScore test (↑) | Fluency test (↓) |
|----------|----------|------------------|-------------------|-----------------|-------------------|--------------------|------------------|
| MAgr | Original | 0.286 | - | 51.49 | 0.272 | - | 70.20 |
| MAgr | CondBERT | 0.161 | 0.966 | 104.10 | 0.148 | 0.964 | 88.69 |
| MAgr | ParaGeDi | 0.162 | 0.931 | 104.46 | 0.172 | 0.929 | 120.78 |
| MAgr | MARCO | 0.145 | 0.958 | 43.54 | 0.141 | 0.954 | 39.10 |
| SBF | Original | 0.351 | - | 58.46 | 0.344 | - | 88.79 |
| SBF | CondBERT | 0.202 | 0.961 | 69.51 | 0.190 | 0.961 | 131.12 |
| SBF | ParaGeDi | 0.186 | 0.921 | 179.88 | 0.192 | 0.923 | 99.96 |
| SBF | MARCO | 0.176 | 0.947 | 54.86 | 0.186 | 0.946 | 48.75 |
| DynaHate | Original | 0.563 | - | 205.73 | 0.578 | - | 220.42 |
| DynaHate | CondBERT | 0.288 | 0.954 | 190.51 | 0.293 | 0.950 | 200.20 |
| DynaHate | ParaGeDi | 0.332 | 0.918 | 217.78 | 0.323 | 0.912 | 240.17 |
| DynaHate | MARCO | 0.274 | 0.939 | 110.50 | 0.277 | 0.936 | 128.84 |

Table 1: Automatic evaluation of detoxification on the validation (val) and test splits of MAgr, SBF, and DynaHate.

We transform the DEXPERTS (Liu et al., 2021) framework, which leverages a PoE
to steer a model away from toxic generations by ensembling token probabilities, to enable rewriting by using AE-LMs.
We obtain the next-token unnormalized log-probabilities (i.e., logits) zi, z+i, and z−i from the base, expert, and anti-expert AE-LMs G, G+, and G−, respectively, conditioned on the previously generated tokens g<i, the original sequence w, and the masked variant wm. We then ensemble those logits into a modified next-token probability distribution:

$$P(X_{i}\mid g_{<i},w,w^{m})=\operatorname{softmax}(z_{i}+\alpha_{1}z_{i}^{+}-\alpha_{2}z_{i}^{-})$$

where Xi is a random variable over the vocabulary V representing the next token at index i given the previous generation g<i, and our two hyperparameters α1 and α2 independently control the impact of the expert and anti-expert for more flexibility.3 In our method, the expert and anti-expert use the masked sequence wm as their input, while the base model uses the unmasked w. Intuitively, the base model tries to replicate the input sequence but is steered by an expert and anti-expert with contrasting probability distributions at the masked locations. This enables rewrites with minimal but meaningful edits on toxic tokens and preservation of non-toxic content. Note that for a masked location, when the base model agrees more with the anti-expert than with the expert, the original token is most likely toxic and will be replaced in the rewrite. On the other hand, if the differences between the expert and anti-expert are not enough to sway the base model, the original token is most likely non-toxic and will be re-added in the rewrite.

3Appendix E gives further intuition into understanding this equation as a PoE.
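As a concrete illustration, a minimal sketch of this ensembling for a single decoding step; the BART-specific conditioning on w and wm and the greedy decoding loop are omitted, and the default α values shown are the MAgr settings from Appendix B.2:

```python
import torch
import torch.nn.functional as F

def ensembled_next_token(z_base: torch.Tensor,
                         z_expert: torch.Tensor,
                         z_anti: torch.Tensor,
                         alpha1: float = 1.5,
                         alpha2: float = 4.25) -> torch.Tensor:
    """P(X_i | g_<i, w, w^m) = softmax(z_i + alpha1 * z_i^+ - alpha2 * z_i^-).

    z_base: logits of the base AE-LM run on the unmasked w;
    z_expert / z_anti: logits of the expert / anti-expert run on the masked wm.
    All three are vectors over the vocabulary for the current position.
    The default alphas are the MAgr values from Appendix B.2 (Table 5).
    """
    logits = z_base + alpha1 * z_expert - alpha2 * z_anti
    return F.softmax(logits, dim=-1)

# One greedy step of the rewrite (logits assumed to come from the three models):
# next_id = ensembled_next_token(z, z_plus, z_minus).argmax(-1)
```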
## 4 Detoxification Experiments & Results
In our experiments, we focus on rewriting sentences from three toxicity datasets, and use both automatic and human evaluations to measure MARCO's performance at detoxifying text.
## 4.1 Datasets
We seek to rewrite English sentences that are already known to be or annotated as toxic, especially sentences that contain more subtle or implicit biases (e.g., without swearwords). In contrast to the Jigsaw corpus used to finetune our experts, we use three out-of-domain datasets with subtle toxicity:
Microagressions.com (MAgr) is a publicly available Tumblr blog where users can anonymously post about socially-biased interactions and utterances in the wild. Each post includes an offending quote and/or a description of the incident. We scrape all *quotes*, resulting in a set of real-world microagression utterances. The validation and test set sizes are 238 and 298 respectively.
Social Bias Frames (SBF; Sap et al., 2020) is a corpus of socially biased and offensive content from various online sources. We use a subset of
![3_image_0.png](3_image_0.png)
SBF from the microaggressions subreddit,4 which contains subtly biased content (Breitfeller et al.,
2019). We use all posts where the majority of annotators marked the text as offensive. The validation and test set sizes are 92 and 114 respectively.
DynaHate (Vidgen et al., 2021) is an adversarially collected set of hate speech, where human annotators create examples that an iteratively improved hate-speech classifier cannot detect. We utilize all four rounds of hate-speech data and use all examples marked as hateful. The validation and test set sizes are 1,858 and 2,011 respectively.
## 4.2 Baselines
We compare MARCO to the two baseline approaches from Dale et al. (2021), which have shown state-of-the-art detoxification performance.
See Appendix B for generation details.
ParaGeDi utilizes a class-conditioned language model (using control codes for toxic and non-toxic styles) on top of a paraphrasing language model to steer generated text towards a specific attribute.
CondBERT follows a pointwise editing setup, first identifying tokens to mask in the input, then using a mask-filling model to replace them. In contrast to MARCO, CondBERT uses a lexicon-based approach to masking words by using weights from a whole-word, toxic language logistic classifier.
## 4.3 Evaluation Setup
We perform automatic and human evaluations, following previous work.
| Original | ...because ain't nobody got time to pick all that cotton. |
|------------|--------------------------------------------------------------------------|
| MARCO | ...because ain't nobody got time to pick all that up. |
| CondBERT | ... because ain't nobody got time to pick all that cotton. |
| ParaGeDi | Because nobody has time to pick up all the cotton. |
| Original | Ugh, I hate living here. There's so many black people around. |
| MARCO | Ugh, I hate living here. There's so many people around. |
| CondBERT | "well, i could be living here. there's so many other nice people around. |
| ParaGeDi | I've been living here for a long time. |
| Original | I wouldn't eat food that has been touched by a m*ng |
| MARCO | I wouldn't eat food that has been touched by a human being. |
| CondBERT | i wouldn't be eating food that has been touched by a m*ng |
| ParaGeDi | I would not eat food touched by a monk. |
Automatic Metrics We assess the quality of the models' rewrites with automatic metrics used in previous work (Liu et al., 2021; Ma et al., 2020).
We report the average **toxicity** score of rewrites using the PerspectiveAPI.5 Additionally, we measure fluency of rewrites by computing their perplexity with an external LM (GPT-2 XL; Radford et al.,
2019), and **meaning similarity** between the input and the rewrite using BERTScore (Zhang et al.,
2019). See Appendix B.3 for further details.
Human Evaluation We conduct a head-to-head human evaluation (Kiritchenko and Mohammad, 2017) of the toxicity of the rewrites using Amazon Mechanical Turk. For each dataset's validation and test sets, we sample 75 prompts each, then compare each pair of MARCO, ParaGeDi and CondBERT's generations against each other and ask which one is less toxic (along with an option to flag either of the rewrites as ungrammatical or disfluent). In our evaluation, we obtained head-to-head judgments from three workers per rewrite pair; workers agreed moderately, with a Cohen's κ=0.575 on average.

5www.perspectiveapi.org, accessed 06-2022.
See Appendix D for details (e.g., MTurk interface).
## 4.4 Results
Automatic metrics (Table 1) show that MARCO
is better at detoxification than baselines across all datasets and splits by 10.3% on average. Human evaluations corroborate this (Figure 2), as MARCO
is on average rated as less toxic than CondBERT
2.2 times more often than vice versa across datasets and splits, and 1.9 times more often vs. ParaGeDi.
In terms of meaning preservation as measured by BERTScore, MARCO is on par with CondBERT,
with an average score within 2.5% across datasets.
However, BERTScore does not measure meaning preservation of only non-toxic content; removing toxic meaning *by definition* requires trade-offs between fluency, style accuracy, and meaning preservation as discussed in most style transfer work
(Dale et al., 2021; Laugier et al., 2021; Malmi et al.,
2020; Ma et al., 2020; Krishna et al., 2020, i.a.).
Compared to DynaHate, MARCO's margin of winning is even larger on MAgr and SBF, which contain more subtle toxicity. For instance, in the first example from Table 2, the subtle reference to cotton picking and slavery is corrected by MARCO,
which replaces "*cotton*" with "up"; in contrast, both baselines fail to revise the toxic content.6 Since all three methods learned toxicity using the same overtly toxic data from Jigsaw, the fact that MARCO deals especially well with subtle toxicity highlights the advantages of using LMs to better model and capture toxicity patterns.
Finally, MARCO's rewrites were more fluent than other methods, according to both automatic metrics and human evaluation. MARCO's rewrites were deemed as ungrammatical the least amount of the time (9.3%), versus 9.7% for CondBERT and 11.7% for ParaGeDi.
## 5 Conclusion
We present MARCO, a novel method for text detoxification, which utilizes auto-encoder language model experts in a mask and reconstruct process.
Our method outperforms strong baselines in automatic and human evaluations, showing strong ability to detoxify even subtle biases. MARCO's success demonstrates the effectiveness of controllable generation mixed with text rewriting methods for controllable revision, and highlights the usefulness of using LMs for capturing toxicity.
## Limitations, Ethical Considerations, And Broader Impacts
Despite the promising performance of MARCO
at detoxifying text, there are several limitations, ethical considerations, and broader impacts of our approach, which we list below.
First, in this work, we seek to *detoxify* sentences.
However, toxicity itself is a subjective and sensitive concept with large potential downstream impacts caused by annotator and subsequent model biases (Sap et al., 2022). We somewhat mitigate this variation by selecting human evaluators that scored highly on a toxicity qualification task (see Appendix D), in line with a prescriptive paradigm of toxicity annotation (Rottger et al., 2022). Future work could investigate the effect of demographics on preference for different rewriting algorithms, e.g., in a more descriptive paradigm.
In addition, achieving meaningful semantic preservation in detoxification is challenging.
Specifically, it is difficult to disentangle the toxic and non-toxic meanings from the input, making it challenging to generate detoxified rewrites with high preservation of only the non-toxic content; this may risk minimizing marginalized groups' speech
(Xu et al., 2021). Partially, this could be due to a lack of context incorporation (social, conversational, preceding sentences; Yerukola et al., 2023); future work should consider adapting detoxification methods in context (Cheng et al., 2020; Roy et al., 2023).
MARCO also requires finetuning two pretrained LMs, which is not computationally insignificant
(Strubell et al., 2019; Schwartz et al., 2020). Future work could explore using smaller LMs to control a larger model (Liu et al., 2021), or even more lightweight approaches.
Additionally, we acknowledge that in the evaluation, we expose Turkers to toxic content, which might harm individuals, especially those with identities that the offensive content applies to (Roberts, 2017; Steiger et al., 2021). However, we pay a fair wage (US$8/h) and our work is approved by our institution's ethics review board (IRB). See Appendix D for further details.
Another major ethical implication of our work is that, following previous work, we use the Perspective API to automatically assess toxicity, a classifier which contains documented biases (e.g., demographic biases and racial biases; Dixon et al., 2018; Sap et al., 2019). Future research could consider different, more holistic views of toxicity and biases
(e.g., Sap et al., 2020).
Finally, although our application in this paper is detoxification, we acknowledge that MARCO
could be applied for the opposite purpose, ie., generation of toxic text from non-toxic text; this is a malicious application which we condemn. Although this issue is more prevalent for controlled generation methods (McGuffie and Newhouse, 2020), this is still a risk MARCO faces. In a similar vein, we do not endorse using the toxicity or microaggression datasets to develop models to generate more toxicity or microaggressions, as this may incur harm, especially to marginalized/vulnerable populations.
## References
Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 1664–1674, Hong Kong, China. Association for Computational Linguistics.
Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, and Jingjing Liu. 2020. Contextual text style transfer. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2915–
2924.
Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. 2018. Creative writing with a machine in the loop: Case studies on slogans and stories. In *23rd International Conference on Intelligent User Interfaces*, IUI '18, page 329–340, New York, NY, USA. Association for Computing Machinery.
Nicola Clark. 2011. Ricky gervais, please stop using the word 'mong'. *The Gardian*. Accessed 2023-05-25.
David Dale, Anton Voronov, Daryna Dementieva, Varvara Logacheva, Olga Kozlova, Nikita Semenov, and Alexander Panchenko. 2021. Text detoxification using large pre-trained neural models. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7979–7996, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification.
Quan H Do. 2019. Jigsaw unintended bias in toxicity classification.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics.
Xiaochuang Han and Yulia Tsvetkov. 2020. Fortifying toxic speech detectors against veiled toxicity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7732–7739, Online. Association for Computational Linguistics.
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022.
Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection.
Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. *Neural Comput.*, 14(8):1771–1800.
Jess Hohenstein, Dominic DiFranzo, Rene F Kizilcec, Zhila Aghajari, Hannah Mieczkowski, Karen Levy, Mor Naaman, Jeff Hancock, and Malte Jung. 2021.
Artificial intelligence in communication impacts language and social relationships.
Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In *Proceedings of the 34th* International Conference on Machine Learning - Volume 70, ICML'17, page 1587–1596. JMLR.org.
Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models.
In *Proceedings of the Workshop on Stylistic Variation*, pages 10–19, Copenhagen, Denmark. Association for Computational Linguistics.
Svetlana Kiritchenko and Saif Mohammad. 2017. Bestworst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 465–470, Vancouver, Canada. Association for Computational Linguistics.
Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020.
Reformulating unsupervised style transfer as paraphrase generation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 737–762, Online. Association for Computational Linguistics.
Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, and Lucas Dixon. 2021. Civil rephrases of toxic texts with self-supervised transformers. In *Proceedings* of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main
Volume, pages 1442–1461, Online. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Juncen Li, Robin Jia, He He, and Percy Liang. 2018.
Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),
pages 1865–1874, New Orleans, Louisiana. Association for Computational Linguistics.
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts.
In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Xinyao Ma, Maarten Sap, Hannah Rashkin, and Yejin Choi. 2020. PowerTransformer: Unsupervised controllable revision for biased language correction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7426–7441, Online. Association for Computational Linguistics.
Eric Malmi, Aliaksei Severyn, and Sascha Rothe. 2020.
Unsupervised text style transfer with padded masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8671–8680, Online. Association for Computational Linguistics.
Kris McGuffie and Alex Newhouse. 2020. The radicalization risks of GPT-3 and advanced neural language models. *CoRR*, abs/2009.06807.
Kevin L. Nadal, Katie E. Griffin, Yinglee Wong, Sahran Hamit, and Morgan Rasmus. 2014. The impact of racial microaggressions on mental health: Counseling implications for clients of color. *Journal of Counseling & Development*, 92(1):57–66.
Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 189–194, Melbourne, Australia. Association for Computational Linguistics.
OHCHR. 2021. Report: Online hate increasing against minorities, says expert. Technical report.
Shrimai Prabhumoye, Alan W Black, and Ruslan Salakhutdinov. 2020. Exploring controllable text generation techniques. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1–14, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Sarah T Roberts. 2017. Social media's silent filter. The Atlantic.
Paul Rottger, Bertie Vidgen, Dirk Hovy, and Janet Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 175–190, Seattle, United States. Association for Computational Linguistics.
Shamik Roy, Raphael Shu, Nikolaos Pappas, Elman Mansimov, Yi Zhang, Saab Mansour, and Dan Roth.
2023. Conversation style transfer using few-shot learning. *arXiv preprint arXiv:2302.08362*.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics.
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah Smith. 2022. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5884–5906, Seattle, United States. Association for Computational Linguistics.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green ai. *Commun. ACM*,
63(12):54–63.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, page 6833–6844, Red Hook, NY, USA. Curran Associates Inc.
Miriah Steiger, Timir J Bharucha, Sukrit Venkatagiri, Martin J. Riedl, and Matthew Lease. 2021. The psychological well-being of content moderators: The emotional labor of commercial moderation and avenues for improving support. In *Proceedings of the* 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp.
Kurt Thomas, Devdatta Akhawe, Michael Bailey, Dan Boneh, Elie Bursztein, Sunny Consolvo, Nicola Dell, Zakir Durumeric, Patrick Gage Kelley, Deepak Kumar, Damon McCoy, Sarah Meiklejohn, Thomas Ristenpart, and Gianluca Stringhini. 2021. Sok: Hate, harassment, and the changing landscape of online abuse. In 2021 IEEE Symposium on Security and Privacy (SP), pages 247–267.
Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021. Learning from the worst: Dynamically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 1667–1682, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Mask and infill: Applying masked language model for sentiment transfer. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19*, pages 5271–5277. International Joint Conferences on Artificial Intelligence Organization.
Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein. 2021. Detoxifying language models risks marginalizing minority voices. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2390–2397, Online. Association for Computational Linguistics.
Akhila Yerukola, Xuhui Zhou, and Maarten Sap. 2023.
"don't take this out of context!" on the need for contextual models and evaluations for stylistic rewriting.
arXiv.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2019. Bertscore:
Evaluating text generation with BERT. *CoRR*,
abs/1904.09675.
## A Modeling Details A.1 Out-Of-The-Box Modeling
We use the HuggingFace Transformers library
(Wolf et al., 2020) version 4.10.2 for out-of-thebox, pretrained BART models and for finetuning using the Trainer class. It is licensed under the Apache License 2.0., and the code is available at https://github.com/huggingface/transformers.
## A.2 Finetuning The Experts
For the expert and anti-expert models, we further finetune the base BART model with 139M parameters, found at https://huggingface.co/facebook/bartbase and licensed under the Apache License 2.0, with the non-toxic and toxic corpus respectively.
We use the same pretraining procedure used to further finetune BART (Lewis et al., 2020), and randomly corrupt sequences during training, which aligns with BART's intended use.
Training Corpus We use the Jigsaw Unintended Bias in Toxicity Classification (Do, 2019) dataset for finetuning our expert and anti-expert, a corpus of forum comments on news articles. Each comment has five binary annotations on if it is toxic or not.
We mark all sequences with no toxic annotations as non-toxic, and all sequences with more than 50% toxic annotations as *toxic*. The intended use of this dataset is to help minimize unintended model bias, which we follow in this work. Finally, we sample 100 instances from the validation set, and find the only individuals mentioned in Jigsaw are high-profile political figures who are already wellknown. We do not perform additional anonymization of the data.
Expert We finetune the expert with the hyperparameters listed in Table 3, using two NVIDIA
RTX6000 GPUs. We select the best checkpoint, based on the lowest evaluation loss, which is at step 100,000. The total training time is 20 hours, for 40 GPU hours of usage.
| Hyperparameter | Assignment |
|-----------------------------|-----------------------|
| model | BART-base |
| number of gpus | 2 |
| effective batch size | 48 |
| total steps | 100,000 |
| steps per evaluation | 1000 |
| learning rate optimizer | AdamW |
| AdamW initial learning rate | 2.5e-06 |
| AdamW epsilon | 1e-06 |
| learning rate schedule | linear with no warmup |
| weight decay | 0.0 |
| max sequence length | 180 |
| max generation length | 230 |
| padding sequences | to max seq length |
Table 3: Hyperparameters used to finetune the expert model

Anti-Expert We finetune the anti-expert with the hyperparameters listed in Table 4, using a single NVIDIA RTX6000 GPU. We select the best checkpoint, based on the lowest evaluation loss, which is at step 38,000. The total training time is 2 hours, for 2 GPU hours of usage.
| Hyperparameter | Assignment |
|-----------------------------|-----------------------|
| model | BART-base |
| number of gpus | 1 |
| effective batch size | 32 |
| total steps | 50,000 |
| steps per evaluation | 1000 |
| learning rate optimizer | AdamW |
| AdamW initial learning rate | 1e-06 |
| AdamW epsilon | 1e-06 |
| learning rate schedule | linear with no warmup |
| weight decay | 0.0 |
| max sequence length | 180 |
| max generation length | 230 |
| padding sequences | to max seq length |
Table 4: Hyperparameters used to finetune the anti-expert model
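A rough sketch of this finetuning setup with the HuggingFace Trainer referenced above. The single-token corruption below is only a toy stand-in for BART's full text-infilling noise, `nontoxic_texts` / `nontoxic_val` are assumed lists of Jigsaw comments, and the per-device batch size of 24 assumes 2 GPUs for an effective batch size of 48 (Table 3):

```python
import random
import torch
from transformers import (BartForConditionalGeneration, BartTokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def collate(batch_texts):
    """Toy denoising collator: mask one random token per sequence and train the
    model to reconstruct the original (a simplification of BART's corruption)."""
    enc = tokenizer(batch_texts, truncation=True, max_length=180,
                    padding="max_length", return_tensors="pt")
    inputs, labels = enc["input_ids"].clone(), enc["input_ids"].clone()
    for row in inputs:
        n_real = int((row != tokenizer.pad_token_id).sum())
        row[random.randrange(1, max(2, n_real - 1))] = tokenizer.mask_token_id
    labels[labels == tokenizer.pad_token_id] = -100     # ignore padding in the loss
    return {"input_ids": inputs, "attention_mask": enc["attention_mask"], "labels": labels}

args = TrainingArguments(
    output_dir="bart-nontoxic-expert",
    per_device_train_batch_size=24,          # assumed split of the effective batch size 48
    learning_rate=2.5e-6, weight_decay=0.0,
    max_steps=100_000, evaluation_strategy="steps", eval_steps=1_000,
)
trainer = Trainer(model=model, args=args, data_collator=collate,
                  train_dataset=nontoxic_texts, eval_dataset=nontoxic_val)
trainer.train()
```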
## B Experimental Details B.1 Datasets
For each dataset, we manually sample and review 75 examples from the validation set, and search for any information that names or uniquely identifies individual people. We find no examples and perform no further anonymization. In addition, we follow the intended use of all three datasets by using them only to rewrite toxic sentences.
We also preprocess each of the datasets in the same way. We use the re package built-in to Python (we use version 3.8.11) to remove any extended white space, including tabs and line breaks, and convert them to one space. We use the html package, also built-in to our Python version, to convert named html character references to their corresponding string, such as "&gt;" to ">". Afterwards, we use the ftfy package, version 6.1.1, found at https://pypi.org/project/ftfy/ to fix broken unicode in text. Finally, we remove any very long sequences: we calculate the 90th percentile of text lengths to be 44, where text length is the number of space-delimited words, and we remove any sequences longer than this.
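A minimal sketch of this preprocessing pipeline under the package versions mentioned above:

```python
import html
import re
import ftfy  # pip install ftfy==6.1.1

def preprocess(texts, length_percentile=0.9):
    """Clean raw posts: collapse whitespace, unescape HTML entities, fix broken
    unicode, then drop sequences longer than the chosen length percentile."""
    cleaned = []
    for t in texts:
        t = re.sub(r"\s+", " ", t).strip()   # tabs / line breaks -> single space
        t = html.unescape(t)                 # e.g. "&gt;" -> ">"
        t = ftfy.fix_text(t)                 # repair broken unicode
        cleaned.append(t)
    lengths = sorted(len(t.split()) for t in cleaned)
    cutoff = lengths[int(length_percentile * (len(lengths) - 1))]   # ~44 words for our data
    return [t for t in cleaned if len(t.split()) <= cutoff]
```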
MAgr We scrape all quotes from posts using the Tumblr API, following the API License Agreement at https://www.tumblr.com/docs/en/api_agreement, which grants the right to use, distribute, display, and modify posted Tumblr content.
SBF There is no license for this dataset.

DynaHate There is no license for this dataset.
## B.2 Generation Details
Generations are performed using a single NVIDIA RTX6000 GPU for all datasets and methods.
## Marco
Masking Hyperparameters We set a masking threshold of τ = 1.2 for all experiments.
Generation Hyperparameters We generate with greedy search for all datasets with a max generation length of 128.
MAgr We perform a search jointly over different hyperparameter values on the development set. We choose the hyperparameter combination that performs best on automatic metrics, shown in Table 5, and use this to generate on the test set.
| Hyperparameter | Tested | Assignment |
|--------------------------|-----------------------|--------------|
| repetition penalty | [1.0, 1.2, 1.5] | 1.0 |
| α1 | [0, 0.5, 1.0, 1.5] | 1.5 |
| α2 | [3.0, 3.25, ..., 5.0] | 4.25 |
| temperature (base model) | [0.9, 1.3, ..., 2.9] | 2.5 |
Table 5: Hyperparameters tested and used for MARCO on MAgr

In total, we sweep over 3 × 4 × 9 × 6 = 648 hyperparameter combinations before choosing a best set to run on our test set. Including this search, we perform approximately 150,000 rewrites. Since 100 generations take about 30 seconds, we use approximately 12.5 GPU hours.
| Hyperparameter | Tested | Assignment |
|--------------------------|-----------------------|--------------|
| repetition penalty | [1.0, 1.2, 1.5] | 1.5 |
| α1 | [0, 0.5, 1.0, 1.5] | 1.5 |
| α2 | [3.0, 3.25, ..., 5.0] | 5.0 |
| temperature (base model) | [0.9, 1.3, ..., 2.9] | 2.9 |
SBF We perform a search jointly over different hyperparameter values on the development set. We choose the hyperparameter combination that performs best on automatic metrics, shown in Table 6, and use this to generate on the test set.
Table 6: Hyperparameters tested and used for MARCO on SBF
| Hyperparameter | Tested | Assignment |
|--------------------------|-----------------------|--------------|
| repetition penalty | [1.0, 1.2, 1.5] | 1.0 |
| α1 | [0.5, 1.0, 1.5] | 1.5 |
| α2 | [4.0, 4.25, ..., 5.0] | 4.75 |
| temperature (base model) | [0.9, 1.7, 2.5] | 2.5 |
As above, we go over 648 hyperparameter combinations before choosing a best set to run on our test set. In total, we rewrite approximately 65,000 sequences. Since 100 generations take about 30 seconds, we use approximately 5.4 GPU hours.
DynaHate We perform a search jointly over different hyperparameter values on the development set. We choose the hyperparameter combination that performs best on automatic metrics, shown in Table 7, and use this to generate on the test set.
Table 7: Hyperparameters tested and used for MARCO on DynaHate

We iterate over a smaller 3 × 3 × 5 × 3 = 135 hyperparameter combinations, due to dataset size, before choosing a final set to use on our test set. In total, we rewrite approximately 240,000 texts.
Since 100 generations take about 30 seconds, we use approximately 20 GPU hours.
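A minimal sketch of the grid sweeps described above, shown for the MAgr grid of Table 5; `evaluate_on_dev` is an assumed helper that rewrites the development set with the given settings and returns a scalar score from the automatic metrics:

```python
from itertools import product

# Grid from Table 5 (MAgr): 3 * 4 * 9 * 6 = 648 combinations.
repetition_penalties = [1.0, 1.2, 1.5]
alpha1_values = [0.0, 0.5, 1.0, 1.5]
alpha2_values = [3.0 + 0.25 * k for k in range(9)]      # 3.0, 3.25, ..., 5.0
temperatures = [0.9 + 0.4 * k for k in range(6)]        # 0.9, 1.3, ..., 2.9

best, best_score = None, float("-inf")
for rp, a1, a2, temp in product(repetition_penalties, alpha1_values,
                                alpha2_values, temperatures):
    score = evaluate_on_dev(rp, a1, a2, temp)   # assumed: rewrite dev set and score it
    if score > best_score:
        best, best_score = (rp, a1, a2, temp), score
```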
Baselines Both of our baselines are available on https://github.com/s-nlp/detox as Jupyter Notebooks. We adapt them to Python files, runnable via the command line. There is no license available.
CondBERT We perform a brief hyperparameter search and try two different values for the CondBERT "number of substitute words" hyperparameter on each validation dataset. We choose the hyperparameter that performs best on automatic metrics, given in Table 8, and use this to generate on the test sets. See Dale et al. (2021) for a detailed description of the hyperparameter.
| Hyperparameter | Tested | Assignment |
|----------------------------|----------|--------------|
| number of substitute words | 1, 10 | 1 |

Table 8: Hyperparameters tested and used for CondBERT

Including our hyperparameter search, we run approximately 7000 rewrites across all datasets and splits. Given that 100 generations take approximately 30 seconds, our usage is 0.6 GPU hours.

CondBERT uses BERT-base, which includes 110M parameters.

ParaGeDi We use greedy decoding for ParaGeDi and use the same hyperparameters as MARCO for each dataset, for fair comparison. Table 9 lists the sole ParaGedi-specific hyperparameter we modify: we do not generate and rerank multiple sequences for fairness.
We perform approximately 5000 rewrites across all datasets and splits. Given that 100 generations take approximately one minute, our usage is 0.8 GPU hours.
ParaGedi uses T5-base as a paraphrasing model, with 220M parameters, in conjunction with a finetuned GPT2-medium discriminator, with 355M parameters.
| Hyperparameter | Assignment |
|-----------------------------------|--------------|
| generate multiple seqs and rerank | false |

Table 9: Hyperparameters used for ParaGeDi
## B.3 Evaluation Metrics
Toxicity To evaluate toxicity, we use the Perspective API, a publicly hosted toxicity classifier trained on the Jigsaw corpus. Given a text, the model outputs a scalar toxicity score between 0 and 1 inclusive. The model, which is located at https://www.perspectiveapi.com/, is continually updated and may change output over time. We query it in June, 2022, following the API Terms of Service and intended use at https://developers.google.com/terms/.
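A minimal sketch of querying the API for a single rewrite; the request and response fields shown follow the public Perspective API documentation at the time of writing and may change, so treat this as illustrative:

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"   # placeholder key from the Perspective API console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return the Perspective TOXICITY summary score in [0, 1] for one text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# avg_toxicity = sum(toxicity(r) for r in rewrites) / len(rewrites)
```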
Fluency We assess fluency by calculating the perplexity of a text with an external, pretrained language model. We use GPT2-base (Radford et al.,
2019), found at https://huggingface.co/gpt2, with 117M parameters, and use it under the MIT license and its intended use.
We run this metric with a single NVIDIA
RTX6000 GPU, which takes approximately 5 seconds per 100 examples. With an estimate of 450,000 texts processed, our usage for this metric is 6.3 GPU hours.
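A minimal sketch of this perplexity computation with the external LM referenced above:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of a rewrite under an external LM (lower = more fluent)."""
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss          # mean token-level cross-entropy
    return torch.exp(loss).item()
```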
Meaning Preservation We use BERTScore
(Zhang et al., 2019), which outputs the cosine distance between model sentence embeddings, to measure the meaning similarity between the original sentence and the rewrite. We use RoBERTa-large (Liu et al., 2019) as our model, which has 354M parameters. We use the code located at https://huggingface.co/spaces/evaluatemetric/bertscore under the MIT License and its intended use.
We run this evaluation with a single NVIDIA
RTX6000 GPU, which takes approximately 15 seconds per 100 examples. With an estimate of 450,000 texts processed, our usage for this metric is 18.7 GPU hours.
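A minimal sketch using the `evaluate` wrapper referenced above; the `model_type` argument is assumed to select RoBERTa-large as in our setup:

```python
import evaluate  # pip install evaluate bert-score

bertscore = evaluate.load("bertscore")

def meaning_similarity(originals, rewrites):
    """Mean BERTScore F1 between each original sentence and its rewrite."""
    out = bertscore.compute(predictions=rewrites, references=originals,
                            model_type="roberta-large")
    return sum(out["f1"]) / len(out["f1"])
```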
## B.4 Total Computational Budget
Summing up our computational usage from the above sections, including finetuning the experts, our total computational budget is 106.1 GPU hours.
## C Example Rewrites
Table 10 shows example generations from each method across all three datasets.
## D Human Evaluation Details
We use annotators from the USA and Canada on Amazon Mechanical Turk, who voluntarily opt-in to the task. Our task was approved by our institution's ethics review board (IRB). A screenshot of our interface for the human evaluation is shown in Figure 3. Our interface describes how the annotators' data will be used.
To gather annotations, we first recruit workers to do a qualification task, where annotators must answer six questions on which rewrite from a pair is less toxic, the same question as in our main human evaluation. The interface for this is the same as our main task shown in Figure 3, but with six sentences instead of one. Annotators who answer at least five out of six questions correctly are approved and can work on the main task. We list the six examples and correct answers in Table 11.
We paid a median wage of $8/h for the qualification and the main task, which is above the minimum wage and a fair value for USA and Canada.
## E Decoding With Product Of Experts
Hinton (2002) introduces the Product of Experts (PoE), an equation that states, given n experts:
$$p(d|\theta_{1},...,\theta_{n})={\frac{\prod_{m}p_{m}(d|\theta_{m})}{\sum_{c}\prod_{m}p_{m}(c|\theta_{m})}}\quad\quad(1)$$
where θm denotes the parameters of model m, d is some data vector, pm(d|θm) denotes the probability of d under model m, and c iterates over all possible data vectors.
Applying the PoE equation to autoregressive generation, d represents a single token, pm(d|θm) represents the next-token probability of d under a specific model, and c iterates over all tokens in the vocabulary V.
Given a vector x, the softmax equation is:
$${\mathrm{softmax}}(x_{i})={\frac{e^{x_{i}}}{\sum_{j=1}^{K}e^{x_{j}}}}\ {\mathrm{~for~}}i=1,2,\ldots,K$$
In the replacing step of MARCO, we perform the following ensembling of unnormalized log-probabilities (i.e., logits) zi, z+i, and z−i from the base, expert, and anti-expert AE-LMs G, G+, and G−, respectively, conditioned on the previously generated tokens g<i, the original sequence w, and the masked variant wm, to obtain the next-token probability:

$$P(X_{i}\mid g_{<i},w,w^{m})=\operatorname{softmax}(z_{i}+\alpha_{1}z_{i}^{+}-\alpha_{2}z_{i}^{-})$$
Consider a single index j corresponding to token Vj in the softmax output. Given that each of zi, z+i, and z−i is a *log-space* probability vector over V, the exponentiation of zi + α1z+i − α2z−i in the softmax equation converts this sum back to a product in probability space, which matches the PoE numerator for Vj. In addition, the denominator in the softmax equation transforms the logit sum exactly to the PoE denominator for Vj. Thus, each probability from the output distribution from MARCO exactly matches Equation 1 and is thus a PoE.
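A small numeric sanity check of this argument on a toy vocabulary, simplified to two experts with unit weights (i.e., α1 = 1 and no anti-expert term):

```python
import torch
import torch.nn.functional as F

# softmax of summed log-space probability vectors equals the renormalized
# product of the experts' probabilities (Equation 1) on a 5-token vocabulary.
torch.manual_seed(0)
logp1 = torch.log_softmax(torch.randn(5), dim=-1)   # log-space probability vectors
logp2 = torch.log_softmax(torch.randn(5), dim=-1)

softmax_of_sum = F.softmax(logp1 + logp2, dim=-1)

prod = logp1.exp() * logp2.exp()
poe = prod / prod.sum()

assert torch.allclose(softmax_of_sum, poe, atol=1e-6)
```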
(Rows in Table 10 are grouped by dataset: MAgr, SBF, and DynaHate.)
| Original | ugh exams suck :'( #thisisgayerthanaids |
|----------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| CondBERT | 'the final exams in english language:' (# thisisgayerthanaids |
| ParaGeDi | ugh exams - '(#thisisgayerthanaids |
| MARCO: | ugh exams suck :'( |
| Original | Your brother is such a girl! |
| CondBERT | my older brother is such a lucky one! |
| ParaGeDi | the brother's great! |
| MARCO: | Your brother is such a great guy! |
| Original | Ugh, I hate living here. There's so many black people around. |
| CondBERT | "well, i could be living here. there's so many other nice people around. |
| ParaGeDi | I've been living here for a long time. |
| MARCO: | Ugh, I hate living here. There's so many people around. |
| Original | What's the difference? All these Orientals are the same. |
| CondBERT | what's the difference? all these orientals are the same. |
| ParaGeDi | all Orientals have the same. |
| MARCO: | What's the difference? All of these things are the same. |
| Original | no fats no fems |
| CondBERT | no demons, no demons. |
| ParaGeDi | No fatty, no fats |
| MARCO: | no fats no additives |
| Original | ...because ain't nobody got time to pick all that cotton. |
| CondBERT | ... because ain't nobody got time to pick all that cotton. |
| ParaGeDi | Because nobody has time to pick up all the cotton. |
| MARCO: | ...because ain't nobody got time to pick all that up. |
| Original | I wouldn't eat food that has been touched by a mong |
| CondBERT | i wouldn't be eating food that has been touched by a mong |
| ParaGeDi | I would not eat food touched by a monk. |
| MARCO: | I wouldn't eat food that has been touched by a human being. |
| Original | I just don't like gays and that is my choice |
| CondBERT | i just don't like them, and that is my choice |
| ParaGeDi | I just don't like the Gays. |
| MARCO: | I just don't like the idea and that is my choice |
| Original | It is totally valid to say that black people are inferior |
| CondBERT | it is totally valid to say that many of these people are inferior |
| ParaGeDi | it's entirely true that the Black is a minority |
| MARCO: | It is totally fair to say that some people are wrong. |
Table 10: Examples of rewrites from each method on all datasets' validation sets
Participation **restriction**: providers/turkers for this task cannot currently be employed by or a student at the University of *Washington.*
![12_image_0.png](12_image_0.png)
| Version | Text |
|---|---|
| Original | i think if people don't succeed in this society, it's because they didn't work hard enough. |
| Rewrite A | if people don't succeed in this society, it's because they didn't work enough. |
| Rewrite B | i think if people don't succeed in this society, it's because they didn't get lucky enough. |
| Original | why can't we interview you? we're trying to do something for your people. |
| Rewrite A | why can't we do something for you? we're trying to do something for you. |
| Rewrite B | we're trying to do something for your people. |
| Original | i don't get it why are pregnant women working |
| Rewrite A | i don't get it why are you not? |
| Rewrite B | i don't understand why pregnant women work. |
| Original | hey honey, you're going the wrong direction, the border's that way! |
| Rewrite A | hmm... hey, honey, it's going the wrong way, the border's that way! |
| Rewrite B | hey, you're going the wrong direction, the one that is! |
| Original | When I look at you, I don't see color |
| Rewrite A | I don't see color when I look at you. |
| Rewrite B | When I look at you, I get distracted. |
| Original | You've been doing well. It's so inspiring that you've overcome your disability! |
| Rewrite A | You've been doing decently. It's so exciting! |
| Rewrite B | You've been doing well with your disability. It's so inspiring! |
Table 11: The six rewrite examples used in the detoxification qualification task for workers on MTurk. The less toxic, correct rewrites are listed in green.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
meister-etal-2023-natural | A Natural Bias for Language Generation Models | https://aclanthology.org/2023.acl-short.22 | After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, making it difficult to estimate the probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a heuristic raises the question: Can we initialise our models with this behaviour and save precious compute resources and model capacity? Here we show that we can effectively endow standard neural language generation models with a separate module that reflects unigram frequency statistics as prior knowledge, simply by initialising the bias term in a model{'}s final linear layer with the log-unigram distribution. We use neural machine translation as a test bed for this simple technique and observe that it: (i) improves learning efficiency; (ii) achieves better overall performance; and perhaps most importantly (iii) appears to disentangle strong frequency effects by encouraging the model to specialise in non-frequency-related aspects of language. | # A Natural Bias For Language Generation Models
Clara Meister∗,1, Wojciech Stokowiec2, Tiago Pimentel3, Lei Yu2, Laura Rimell2, Adhiguna Kuncoro2
1ETH Zürich 2DeepMind 3University of Cambridge
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
## Abstract
After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, making it difficult to estimate the probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus.
The use of such a heuristic raises the question: Can we initialise our models with this behaviour and save precious compute resources and model capacity? Here we show that we can effectively endow standard neural language generation models with a separate *module* that reflects unigram frequency statistics as *prior knowledge*, simply by initialising the bias term in a model's final linear layer with the log-unigram distribution. We use neural machine translation as a test bed for this simple technique and observe that it: (i) improves learning efficiency; (ii) achieves better overall performance; and perhaps most importantly
(iii) appears to disentangle strong frequency effects by encouraging the model to specialise in non-frequency-related aspects of language.
## 1 Introduction
Consider the structure of a number of core tasks in natural language processing (NLP): predicting the next word following a given context. What if you did not understand the context - for example, if you did not know the language? In the absence of such knowledge, the optimal prediction would be the language's most frequent word. In fact, optimally one would predict each word according to its (unigram) frequency.1 This is precisely the strategy that neural language models have been empirically observed to employ during early training stages (Chang and Bergen, 2022) - before they have learnt a language's syntax or semantics.

∗Work done during internship at DeepMind.

1Notably, void of contextual clues, models of human language processing (Morton, 1969) would default to similar strategies. A word's frequency also influences its age of acquisition (Gilhooly and Logie, 1980; Morrison et al., 1997), and the time taken to produce it in speech (Gerhand and Barry, 1998; Zevin and Seidenberg, 2002).

![0_image_0.png](0_image_0.png)

Figure 1: Average per-token divergence of the model from unigram, uniform, and empirical distributions of respective training set as a function of training step (log-scale). Early in training, the model output closely matches the unigram distribution for all contexts.
Although this strategy of predicting the unigram distribution emerges early in training, it still takes the model hundreds (or even thousands) of parameter updates to learn it from a cold start
(see Fig. 1 or Chang and Bergen, 2022, Fig. 5). Yet a straightforward factorisation of a language model's final linear layer shows that we can in fact encode this frequency-related knowledge *prior to* any optimisation, 2 with the goal of bypassing this early stage of learning: Concretely, this is done by setting the bias term in a model's final linear layer to the log-unigram distribution of the training data.
Mathematically, this setup can be loosely interpreted as a modular "product of experts" (Hinton, 2002), where the bias term represents a simple unconditional distribution over the vocabulary, thus allowing the input-dependent logits to specialise in capturing contextual information. Indeed, we argue that a more modular design that disentangles word-frequency effects from contextual information may be desirable, given the recently-observed negative effects of word frequency statistics on models' generalisation abilities (Wei et al., 2021; Puccetti et al., 2022; Rajaee and Pilehvar, 2022).

2The unigram distribution of the training data is known before optimisation, as it is often computed when building vocabularies or tokenising; hence this approach should come at no extra cost.
While this initialisation approach has been historically used in language models (Mnih and Hinton, 2007; Botha and Blunsom, 2014; Fang et al., 2015, *inter alia*), it has not seen widespread adoption within our current language generation architectures - an observation we attribute to uncertainty around whether the bias term automatically specialises to capture frequency without explicit encouragement to do so. We first observe that this is not the case - in fact, the final-layer bias term rarely changes from its *random* initialisation (see App. A.6), suggesting frequency is encoded elsewhere in the model parameters. We then empirically explore the impact of this initialisation on various aspects of model behaviour - within the context of current Transformer models for machine translation - including overall performance, learning efficiency, and the relationship between model-assigned probability and word frequency.
We find this initialisation indeed leads to increased training efficiency: models achieve higher BLEU
scores earlier on in training. More surprisingly, it also leads to improved *overall performance*. We discuss several potential reasons for these results, including changes to training dynamics and a mitigation of overfitting to surface statistics.
## 2 Probabilistic Language Generators 2.1 Preliminaries
We consider neural probabilistic models pθ for language generation. While there are a variety of architectural choices that can be made, most are autoregressive and follow a local-normalisation scheme. Explicitly, given prior context $\mathbf{y}_{<t} \stackrel{\mathrm{def}}{=} \langle y_0, \ldots, y_{t-1} \rangle$, these models output a probability distribution $p_\theta(\cdot \mid \mathbf{y}_{<t})$ over the next token $y \in \overline{\mathcal{V}} \stackrel{\mathrm{def}}{=} \mathcal{V} \cup \{\mathrm{EOS}\}$, where $\mathcal{V}$ is the model's predefined vocabulary and EOS is a special end-of-sequence token. To ensure that pθ provides a valid probability distribution, the output of the model is projected onto the probability simplex $\Delta^{|\overline{\mathcal{V}}|-1}$ using a softmax transformation after a (learnt) linear projection layer:3
$$p_{\theta}(y\,|\,\mathbf{y}_{<t})={\mathrm{softmax}}\,(\mathbf{W}\,\phi(\mathbf{y}_{<t})+\mathbf{b})_{y}\tag{1}$$ $$\stackrel{\mathrm{def}}{=}\frac{e^{\mathbf{W}_{y}\,\phi(\mathbf{y}_{<t})+\mathbf{b}_{y}}}{\sum_{y^{\prime}\in\overline{\mathcal{V}}}e^{\mathbf{W}_{y^{\prime}}\,\phi(\mathbf{y}_{<t})+\mathbf{b}_{y^{\prime}}}}\tag{2}$$
where $\mathbf{W} \in \mathbb{R}^{|\overline{\mathcal{V}}| \times d}$ denotes a weight matrix, $\mathbf{b} \in \mathbb{R}^{|\overline{\mathcal{V}}|}$ a bias vector, and $\phi : \mathcal{V}^* \to \mathbb{R}^d$ the model's d-dimensional encoding for a given context.4 A number of prior studies have investigated whether - and if so, at what stage during the learning process - NLP models learn various linguistic phenomena (Alain and Bengio, 2017; Adi et al.,
2017, *inter alia*). Among the key findings are that language models reflect the statistical tendencies exhibited by their respective training corpora
(Takahashi and Tanaka-Ishii, 2017, 2019; Meister and Cotterell, 2021); some of which are learnt early on in training (Liu et al., 2021). For example, Chang and Bergen (2022) observe that, after only
∼ 1000 training updates, language models' outputs are approximately equal to the unigram distribution, regardless of the context that they condition on. We similarly observe this for machine translation models (see Fig. 1).
## 2.2 A Natural Bias
These learning trends motivate trying to supply language generation models with a natural starting point: the unigram distribution. Fortunately, this form of prior knowledge can be modularly encoded in standard neural models using the bias term of the final, pre-softmax linear layer. Consider the standard operation for projecting the output of the model onto the probability simplex.
Upon closer inspection, we see that eq. (2) has an interpretation as the product of two probability distributions, up to a normalisation constant:
$$p_{\theta}(\cdot\,|\,\mathbf{y}_{<t})\propto e^{\mathbf{W}\,\phi(\mathbf{y}_{<t})}\cdot e^{\mathbf{b}}\tag{3}$$ $$\propto p_{\mathbf{W}\phi}(\cdot\,|\,\mathbf{y}_{<t})\cdot p_{b}(\cdot)\tag{4}$$
i.e., one described by pWϕ(· | y<t) - which is *contextual* as it depends on the input y<t - and a separate, *non-contextual* term denoted by pb(·). Thus, we can qualitatively view this setup as factorising the model's prediction into these two components.5 In this light, it makes intuitive sense that pb should be the unigram distribution - a distribution which optimally predicts (w.r.t. negative log-likelihood loss) the next-token when there is no contextual information to condition on. Note that such a setup - where a probability distribution is modelled using a product of several simpler distributions, each of which can specialise on modelling one aspect of the problem - is referred to as a product of experts (Hinton, 2002).6

4We index vectors and matrices using y, assuming an isomorphic mapping between y ∈ V and integers [1, . . . , |V|].

5Given this decomposition, one might expect that models learn to use the bias term to encode frequency on their own. Yet we do not find this to be the case empirically (App. A.6).
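As a numerical illustration of this decomposition (our own sketch, not part of the paper), the snippet below checks that a softmax over Wϕ(y<t) + b with b set to the log-unigram distribution is identical to the renormalised product of the contextual distribution and the unigram distribution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V, d = 8, 4                            # toy vocabulary and hidden sizes
W = rng.normal(size=(V, d))            # final projection matrix
phi = rng.normal(size=d)               # contextual encoding phi(y_<t)
unigram = rng.dirichlet(np.ones(V))    # toy unigram distribution
b = np.log(unigram)                    # bias initialised to log-unigram

p_full = softmax(W @ phi + b)          # eq. (1) with the unigram bias
product = softmax(W @ phi) * unigram   # p_{W phi} * p_b from eq. (4)
p_poe = product / product.sum()        # renormalise the product of experts

assert np.allclose(p_full, p_poe)      # the two views coincide exactly
```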
## 3 Related Work
As previously mentioned, prior work has likewise taken advantage of the interpretation of the bias term as a frequency offset when initialising model parameters (Mnih and Hinton, 2007; Botha and Blunsom, 2014; Fang et al., 2015, *inter alia*). Yet such techniques have fallen to the wayside for a number of years now, as other more prominent determinants of model performance and training efficiency have dominated the community's attention.
We revisit this initialisation strategy in the context of today's neural language models.
The practice of directly incorporating unigram probabilities into next-word predictions can be likened to the back-off methods proposed in the n-gram literature (Kneser and Ney, 1995; Chen and Goodman, 1999).7Indeed, there is an entire class of methods built around learning deviations from some base reference distribution, some of which have been employed specifically for language modelling (Berger and Printz, 1998; Teh, 2006; Grave et al., 2017). More recently, Li et al. (2022) cast neural language modelling as the learning of the residuals not captured by n-gram models and Baziotis et al. (2020) use language models as a reference distribution for training low resource machine translation models.
Another class of prior work has similarly explored efficient strategies for model weight initialisation (Glorot and Bengio, 2010; Le et al., 2015; Mu et al., 2018, *inter alia*), including random variable choices and re-initialisation criteria. In a similar vein, Ben Zaken et al. (2022) investigate the usefulness of the bias term, albeit for efficient fine-tuning techniques. They show that, often, modifying solely the bias parameters during fine-tuning provides comparable performance to updating the entire model. Both our results thus showcase the usefulness of this simple set of parameters for natural language processing tasks.

6The comparison of mixtures and products of experts is well summarised by the phrase: a single expert in a mixture has the power to pass a bill while a single expert in a product has the power to veto it. Each paradigm has its advantages. Here, we argue that the latter is more suitable for language modelling, as the mixture formulation presents the issue that high-frequency tokens will be strongly "up-voted" by the expert corresponding to the unigram distribution. As these models already have a propensity to select high frequency tokens, even in improper contexts (Wei et al., 2021), this is arguably an undesirable trait.

7How to properly estimate the unigram distribution itself is an important, but often overlooked, question. In our work, we consider a predefined and finite vocabulary V, and estimate probabilities using their frequency in a training corpus. For a longer discussion on this see Nikkarinen et al. (2021).
Other works have also embraced frameworks akin to product or mixture of experts in language modelling or generation tasks. For example Neubig and Dyer (2016) combine neural and countbased language models in a mixture of experts paradigm; Artetxe et al. (2022) take advantage of the mixture of experts structure to propose a compute-efficient language modelling architecture. In contrast, we suggest a simple initialisation method that does not require training additional models or major changes to model architectures.
## 4 Experiments
We explore the effects of the unigram bias initialisation strategy on neural machine translation systems in comparison to a standard initialisation technique: initialising the bias to all 0s (denoted as $\vec{0}$) or omitting the bias term entirely.
## 4.1 Setup
We perform experiments with several language pairs: WMT'14 German-to-English (De→En; Bojar et al., 2014), IWSLT'14 German-to-English (De↔En; Cettolo et al., 2012), and Afrikaans/Rundi-to-English in the AfroMT
dataset (Run↔En and Af↔En; Reid et al., 2021).
These corpora span several language families and different sizes to demonstrate performance in higher, medium and lower resource domains
(∼ 4.5M, ∼ 750K and ∼ 150K sentence pairs, respectively). All models use the standard Transformer encoder–decoder architecture (Vaswani et al., 2017), with 6 layers in both. The IWSLT
and AfroMT Run↔En models have 4 attention heads per layer (adjusted for the smaller size of these datasets) while all other models have 8 attention heads. Dropout is set to 0.1; the feedforward hidden dimension is set to 512 for the WMT
model and 256 for all other models. Parameter estimation is performed using stochastic gradient-descent techniques, with the standard maximum likelihood objective and label smoothing (Szegedy et al., 2015) with hyperparameter α = 0.1. We use the Adam optimizer (Kingma and Ba, 2015)
with (β1, β2) = (0.9, 0.997). Early stopping was performed during training, i.e., model parameters were taken from the checkpoint with the best validation set BLEU (Papineni et al., 2002).
We preprocess the data using subword tokenisation with the SentencePiece library (Kudo and Richardson, 2018).8 For initialisation, unigram frequencies are computed on respective training sets after tokenisation is performed. We do not hold bias term parameters fixed during training, although we found that they do not change perceptibly from their values at initialisation, even for the $\vec{0}$-initialised model (App. A.6). The projection matrix W in the final linear layer is initialised element-wise using $\mathcal{N}(0, 1/\sqrt{d})$, where d is the embedding hidden dimension; the matrix is then scaled such that the matrix ℓ2 norm is approximately equal in magnitude to the bias ℓ2 norm.
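A minimal sketch of this initialisation (our own rendering rather than the authors' code; the add-one smoothing for unseen vocabulary items is an assumption on our part):

```python
import numpy as np
from collections import Counter

def init_final_layer(tokenised_corpus, vocab, d, smoothing=1.0, seed=0):
    """Return (W, b): b is the log-unigram distribution of the training
    tokens, W ~ N(0, 1/sqrt(d)) rescaled to roughly match b's l2 norm."""
    counts = Counter(tok for sent in tokenised_corpus for tok in sent)
    freqs = np.array([counts[t] + smoothing for t in vocab], dtype=float)
    b = np.log(freqs / freqs.sum())

    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(len(vocab), d))
    W *= np.linalg.norm(b) / np.linalg.norm(W)   # match the two l2 norms
    return W, b

# Usage on a toy corpus with a three-token vocabulary.
W, b = init_final_layer([["a", "b", "a"], ["a", "c"]], ["a", "b", "c"], d=4)
```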
Decoding is done with length-normalised beam search with a beam size of 5, which was similarly chosen based on validation BLEU scores. All BLEU and chrF (Popovic´, 2015) scores are computed using the sacreBLEU library (Post, 2018).
## 4.2 Results
We present main results here, and defer additional experimental results that exhibit similar trends
(e.g., using chrF as the evaluation metric, or training on WMT) to App. A. We also explore several extensions that build on the unigram initialisation, albeit with mixed results; again, see App. A.
Performance. Fig. 2 presents mean test BLEU
scores with standard error estimates from 5 different random seeds per dataset–initialisation strategy combination. On 5 out of the 6 datasets, the unigram bias initialisation technique leads to comparable or better test set performance in comparison to standard bias term initialisation techniques.
Efficiency. In order to quantify training efficiency, we estimate9 the area under the validation BLEU learning curve (ALC) (Guyon et al., 2011; Liu et al., 2020) for the first 20k training updates;10 for the sake of interpretability, scores are renormalised by the interval span. From Fig. 3, we see that, on 5 out of the 6 datasets, higher BLEU
is achieved earlier on in training. Hence, the unigram bias initialisation approach appears to reach better performance in fewer iterations than standard initialisation approaches, which would be beneficial in cases where training efficiency considerations are paramount (e.g., in low-resource languages or in compute-limited settings).
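For concreteness, ALC can be estimated with a trapezoidal rule over checkpointed validation scores; the sketch below is our own illustration with hypothetical checkpoint values, renormalised by the interval span as described.

```python
import numpy as np

def normalised_alc(steps, bleu, max_step=20_000):
    """Area under the validation-BLEU learning curve (trapezoidal rule),
    divided by the interval span so a constant curve at x gives ALC = x."""
    steps, bleu = np.asarray(steps, float), np.asarray(bleu, float)
    keep = steps <= max_step
    s, b = steps[keep], bleu[keep]
    area = ((b[1:] + b[:-1]) / 2.0 * np.diff(s)).sum()
    return area / (s[-1] - s[0])

# Hypothetical validation checkpoints every 2k updates.
steps = np.arange(0, 22_000, 2_000)
bleu = [0.0, 5.0, 11.0, 16.0, 19.0, 21.0, 22.0, 23.0, 23.5, 24.0, 24.2]
print(round(normalised_alc(steps, bleu), 2))
```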
Analysis. The aim of this analysis is to investigate whether - and to what extent - the final-layer bias unigram initialisation leaves the contextual part of the network, pWϕ(·|y<t), to better capture *non-frequency* effects. To this end, we examine model-assigned log-probability as a function of token frequency. In Fig. 4, we plot a token's unigram log-frequency against the average log-probability assigned to it (when it is the ground-truth token) by a model initialised with (left) a bias term of $\vec{0}$ and (right) a log-unigram bias term, binning them in equal-length intervals and averaging them for clarity. In Fig. 4a, the full model parameters are used. In Fig. 4b, the bias terms are not added in the linear projection, i.e., only the contextual part of eq. (4), pWϕ(·|y<t), is computed.

The upward trend in average model-assigned log-probability in Fig. 4a suggests that, in general, models are better (or at least more confident) when predicting more frequent tokens. This trend holds when the bias term is omitted from the final linear computation of the $\vec{0}$-initialised model. Interestingly though, when the bias term is omitted from the unigram-initialised model, the trend appears to reverse. This change suggests that for unigram-initialised models, frequency may instead be encoded in the bias term, providing evidence that for these models, pWϕ(· | y<t) may indeed specialise in non-frequency aspects of language.
## 5 Discussion
NLP models have been observed to overfit to surface cues in their training data, impeding their ability to generalise at inference time (Warstadt et al., 2020; Wei et al., 2021). Thus, one could argue that learning or encoding the superficial statistical tendencies of language is not necessarily a good thing. Yet, empirical results suggest that it may in fact be an important part of model learning dynamics (see App. A.4, for example). Indeed, Takahashi and Tanaka-Ishii (2019) find evidence that more powerful language models have a natural bias for learning them. Here we ask if - when initialising model parameters - we can *explicitly* endow our models with prior knowledge about one such statistical tendency: frequency.
While the result that this initialisation strategy improves training efficiency is perhaps not surprising, the relatively consistent improvement in overall performance is. We offer two possible explanations for this improvement. The first is that this initialisation beneficially alters model learning dynamics at the beginning of training, especially as early learning dynamics can have an outsized impact on final model performance (Achille et al.,
2019). A second possible explanation is that it disentangles frequency in the modelling of contextual probabilities. If pb (eq. (4)) explicitly models the unigram distribution, then our model does not need to capture this component of the conditional distribution in its other parameters, which frees up model capacity to focus on more complex phenomena within natural language. Its success thus motivates exploring the use of higher-order statistical models, such as a bigram or trigram model, in an attempt to further disentangle surfaces statistics from more nuanced components of natural language in a modular fashion.
## 6 Conclusion And Future Work
In this work, we revisit a simple initialisation technique in the context of modern neural language generation models: setting the bias term in the final linear projection layer to the log-unigram distribution of (sub)words within the training corpus.
This strategy leads to more efficient training; perhaps more surprisingly, it also leads to better overall performance in our machine translation experiments. We offer analysis and discussion as to the cause of these trends. An interesting direction for future work could be determining the effects that this initialisation procedure has on various model properties, e.g., its embedding space, and its benefits specifically in low-resource settings. Furthermore, extensions of this work could explore potential uses of this strategy in the mitigation of problems with lexically infrequent words, e.g., by analysing via the decomposition in eq. (4) whether a model's probability estimate for a word is being driven by frequency or contextual components. Finally, this technique is not limited to models of distributions over strings; it is in fact applicable to any neural classification setting, the exploration of which is left to future work.
## 7 Acknowledgements
We would like to thank the members of the DeepMind Language Team for insightful discussions during the course of this work and specifically, Chris Dyer and Kris Cao for helpful feedback on the initial version of this paper and John Hale for pointers to references on human language acquisition. We would also like to thank Clément Guerner for detailed feedback on clarity and presentation.
## 8 Limitations
Perhaps the main limitation of this work is that we only explore the approach within the context of machine translation benchmarks, although we conduct extensive experiments within this task that cover different training data scales and diverse pairs of languages, including low-resource ones. Nevertheless, we remark that the proposed approach is entirely general-purpose, and can be applied to any other language generation or even any neural classification tasks. We leave it to future work to investigate whether the same gains would apply in those settings. Furthermore, we have not yet explored how this technique would interact with other modelling choices, such as different optimizers, training objectives, or subword tokenisation algorithms. Lastly, our unigram initialisation of the bias term is currently done at the level of subword units, which do not always correspond to lexically or morphologically meaningful linguistic units. We leave the extension of this approach to more meaningful linguistic units, such as words or morphemes, to future work.
## 9 Ethical Considerations
We foresee no ethical issues that could arise from the findings presented in this work.
## References
Alessandro Achille, Matteo Rovere, and Stefano Soatto. 2019. Critical learning periods in deep networks. In *7th International Conference on Learning* Representations.
Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In *5th International Conference on* Learning Representations.
Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier
probes. In *5th International Conference on Learning Representations*.
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giridharan Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeffrey Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, and Veselin Stoyanov. 2022. Efficient large scale language modeling with mixtures of experts. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11699–11732, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Christos Baziotis, Barry Haddow, and Alexandra Birch. 2020. Language model prior for low-resource neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 7622–7634, Online.
Association for Computational Linguistics.
Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel.
2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics.
Adam Berger and Harry Printz. 1998. A comparison of criteria for maximum entropy/ minimum divergence feature selection. In *Proceedings of the Third Conference on Empirical Methods for Natural Language* Processing, pages 96–106, Palacio de Exposiciones y Congresos, Granada, Spain. Association for Computational Linguistics.
Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Association for Computational Linguistics.
Jan Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1899–1907, Bejing, China. PMLR.
Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Annual conference of the European Association for Machine Translation, pages 261–268, Trento, Italy. European Association for Machine Translation.
Tyler A. Chang and Benjamin K. Bergen. 2022. Word acquisition in neural language models. *Transactions of the Association for Computational Linguistics*, 10:1–16.
Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. *Computer Speech & Language*,
13(4):359–394.
Hao Fang, Mari Ostendorf, Peter Baumann, and Janet Pierrehumbert. 2015. Exponential language modeling using morphological features and multitask learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(12):2410–
2421.
Simon Gerhand and Christopher Barry. 1998. Word frequency effects in oral reading are not merely ageof-acquisition effects in disguise. *Journal of Experimental Psychology: Learning, Memory and Cognition*, 24(4):267–83.
Ken J Gilhooly and Robert H Logie. 1980. Age-ofacquisition, imagery, concreteness, familiarity, and ambiguity measures for 1,944 words. *Behavior Research Methods & Instrumentation*, 12(4):395–427.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the Thirteenth* International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy. PMLR.
Edouard Grave, Armand Joulin, and Nicolas Usunier.
2017. Improving neural language models with a continuous cache. In 5th International Conference on Learning Representations.
Isabelle Guyon, Gavin C. Cawley, Gideon Dror, and Vincent Lemaire. 2011. Results of the active learning challenge. In *Active Learning and Experimental Design workshop In conjunction with AISTATS*
2010, volume 16 of *Proceedings of Machine Learning Research*, pages 19–45, Sardinia, Italy. PMLR.
Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. *Neural Computation*, 14(8):1771–1800.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations*.
R. Kneser and H. Ney. 1995. Improved backing-off for M-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181–184 vol.1.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System
Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics.
Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton.
2015. A simple way to initialize recurrent networks of rectified linear units. *CoRR*, abs/1504.00941.
Huayang Li, Deng Cai, Jin Xu, and Taro Watanabe.
2022. n-gram is back: Residual learning of neural text generation with n-gram language model.
In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A. Smith. 2021. Probing across time: What does RoBERTa know and when?
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 820–842, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zhengying Liu, Zhen Xu, Shangeth Rajaa, Meysam Madadi, Julio C. S. Jacques Junior, Sergio Escalera, Adrien Pavao, Sebastien Treguer, Wei-Wei Tu, and Isabelle Guyon. 2020. Towards automated deep learning: Analysis of the AutoDL challenge series 2019. In *Proceedings of the NeurIPS 2019 Competition and Demonstration Track*, volume 123 of Proceedings of Machine Learning Research, pages 242–252. PMLR.
Clara Meister and Ryan Cotterell. 2021. Language model evaluation beyond perplexity. In *Proceedings of the 59th Annual Meeting of the Association* for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5328–5339, Online. Association for Computational Linguistics.
Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In *Proceedings of the 24th International Conference* on Machine Learning, ICML '07, page 641–648, New York, NY, USA. Association for Computing Machinery.
Catriona M. Morrison, Tameron D. Chappell, and Andrew W. Ellis. 1997. Age of acquisition norms for a large set of object names and their relation to adult estimates and other variables. *The Quarterly Journal of Experimental Psychology Section A*,
50(3):528–559.
John Morton. 1969. Interaction of information in word recognition. *Psychological Review*, 76(2):165.
Norman Mu, Zhewei Yao, Amir Gholami, Kurt Keutzer, and Michael W. Mahoney. 2018. Parameter re-initialization through cyclical batch size schedules. *CoRR*, abs/1812.01216.
Graham Neubig and Chris Dyer. 2016. Generalizing and hybridizing count-based and neural language models. In *Proceedings of the 2016 Conference on*
Empirical Methods in Natural Language Processing, pages 1163–1172, Austin, Texas. Association for Computational Linguistics.
Irene Nikkarinen, Tiago Pimentel, Damián Blasi, and Ryan Cotterell. 2021. Modeling the unigram distribution. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3721–
3729. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In *Proceedings of* the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Maja Popović. 2015. chrF: Character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–
191, Belgium, Brussels. Association for Computational Linguistics.
Giovanni Puccetti, Anna Rogers, Aleksandr Drozd, and Felice Dell'Orletta. 2022. Outlier dimensions that disrupt transformers are driven by frequency.
In *Findings of the Association for Computational* Linguistics: EMNLP 2022, pages 1286–1304, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *CoRR*, abs/1910.10683.
Sara Rajaee and Mohammad Taher Pilehvar. 2022.
An isotropy analysis in the multilingual BERT embedding space. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 1309–
1316, Dublin, Ireland. Association for Computational Linguistics.
Machel Reid, Junjie Hu, Graham Neubig, and Yutaka Matsuo. 2021. AfroMT: Pretraining strategies and reproducible benchmarks for translation of 8 african languages. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826.
Shuntaro Takahashi and Kumiko Tanaka-Ishii. 2017.
Do neural nets learn statistical laws behind natural language? *PLOS ONE*, 12(12):1–17.
Shuntaro Takahashi and Kumiko Tanaka-Ishii. 2019.
Evaluating computational language models with scaling properties of natural language. *Transactions* of the Association for Computational Linguistics, 45(3):481–513.
Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992, Sydney, Australia. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30*, pages 5998–6008.
Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 217–235, Online. Association for Computational Linguistics.
Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick.
2021. Frequency effects on syntactic rule learning in transformers. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 932–948, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jason D. Zevin and Mark S. Seidenberg. 2002. Age of acquisition effects in word reading and other tasks. Journal of Memory and Language, 47(1):1–29.
## A Additional Experiments

## A.1 Additional Training Trends

(Figures only.)

## A.2 WMT Experiments

(Figures only.)

## A.3 chrF Scores

(Figures only.)

## A.5 Out-Of-Domain Performance

(Figures only.)

## A.4 Regularising Away From The Unigram Distribution
Prior work has suggested that models' learning of surface statistics, such as the unigram distribution, may harm their generalisation abilities
(Warstadt et al., 2020; Wei et al., 2021). Under this premise, it seems feasible that the learning trends observed in Fig. 1 could have downstream negative side-effects, e.g., the inappropriate preference for higher frequency words observed in (Wei et al.,
2021). Given the importance of early stage training dynamics (Achille et al., 2019), it may even be the root cause of such behaviour. In the effort to test this hypothesis, we try to regularise a model's output *away* from the unigram distribution in early stages of training. Specifically, we instead minimise the objective KL(p || pθ)−λ KL(ω(p) || pθ)
for empirical distribution p and the unigram distribution of this empirical distribution ω(p). λ is a hyperparameter. We use this objective for the initial steps of training, then switching back to the standard objective KL(p || pθ). In Fig. 11, we observe that this form of regularisation leads to worse (or equivalently performing) models by the time of convergence. Results were similar when evaluated on out-of-distribution data.
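A sketch of this modified objective (our own illustration; the distributions and `lam` are placeholders):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two probability vectors."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def anti_unigram_objective(p_emp, p_model, unigram, lam=0.1):
    """KL(p || p_theta) - lam * KL(unigram || p_theta): the subtracted term
    rewards the model for moving away from the unigram distribution."""
    return kl(p_emp, p_model) - lam * kl(unigram, p_model)
```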
## A.6 Change In Bias Term Over Training
In Figs. 13 and 14, we see the divergence of the bias term from the unigram distribution and the magnitude of the bias term, respectively. Interestingly, we see that neither value changes perceptibly from the time of initialisation onward, suggesting the bias term itself does not change much from its initialised value. This trend is consistent across seeds and datasets.
## A.7 Initialisation With Bias Term From Large-Scale Dataset
We additionally explore the effects of initialising the bias term with the log-unigram distribution, as estimated from a larger dataset in a more general purpose domain. We hypothesise that this strategy could be useful in low resource settings. We find that this indeed improves the generalisation performance of a model trained on IWSLT when evaluated on an OOD dataset (see Fig. 16).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✗ A2. Did you discuss any potential risks of your work?
We do not foresee any potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract, section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?**

Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Model parameters are provided in appendix B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
nandi-etal-2023-simple | Simple Augmentations of Logical Rules for Neuro-Symbolic Knowledge Graph Completion | https://aclanthology.org/2023.acl-short.23 | High-quality and high-coverage rule sets are imperative to the success of Neuro-Symbolic Knowledge Graph Completion (NS-KGC) models, because they form the basis of all symbolic inferences. Recent literature builds neural models for generating rule sets, however, preliminary experiments show that they struggle with maintaining high coverage. In this work, we suggest three simple augmentations to existing rule sets: (1) transforming rules to their abductive forms, (2) generating equivalent rules that use inverse forms of constituent relations and (3) random walks that propose new rules. Finally, we prune potentially low quality rules. Experiments over four datasets and five ruleset-baseline settings suggest that these simple augmentations consistently improve results, and obtain up to 7.1 pt MRR and 8.5 pt Hits@1 gains over using rules without augmentations. | # Simple Augmentations Of Logical Rules For Neuro-Symbolic Knowledge Graph Completion
Ananjan Nandi Navdeep Kaur Parag Singla Mausam Indian Institute of Technology, Delhi
{tgk.ananjan, navdeepkjohal}@gmail.com {parags, mausam}@cse.iitd.ac.in
## Abstract
High-quality and high-coverage rule sets are imperative to the success of Neuro-Symbolic Knowledge Graph Completion (NS-KGC)
models, because they form the basis of all symbolic inferences. Recent literature builds neural models for generating rule sets, however, preliminary experiments show that they struggle with maintaining high coverage. In this work, we suggest three simple augmentations to existing rule sets: (1) transforming rules to their abductive forms, (2) generating equivalent rules that use inverse forms of constituent relations and (3) random walks that propose new rules. Finally, we prune potentially low quality rules. Experiments over four datasets and five ruleset-baseline settings suggest that these simple augmentations consistently improve results, and obtain up to 7.1 pt MRR and 8.5 pt Hits@1 gains over using rules without augmentations.
## 1 Introduction
Knowledge Graphs (KGs) comprise important world knowledge facts, but are typically incomplete, due to their ever-increasing size. KG embeddings (Wang et al., 2017) has been the dominant methodology for knowledge graph completion
(KGC). A KG embedding approach represents entities and relations as learnable dense vectors and computes a score for an unseen fact as a function over them. These generally have state-of-the-art performance, especially for large KGs.
Recently, neuro-symbolic (NS-KGC) approaches for the task have been proposed, where KG embeddings are enhanced by inferences over an explicit first-order logic rule set (Zhang et al.,
2020; Qu et al., 2021). The resulting models bring together best of both worlds - generalizability and interpretability of explicit logical rules, and the scalability and representation power of embeddings. Unfortunately, a key roadblock for success of NS-KGC is the availability of a high-coverage rule set.
Early NS-KGC methods, such as NeuralLP (Yang et al., 2017) and DRUM (Sadeghian et al., 2019), learn rules as part of a single model, but do not have performance competitive with embedding models such as RotatE (Sun et al.,
2019). A recent NS-KGC model, RNNLogic (Qu et al., 2021), matches empirical performance with embedding approaches. It has a separate neural component that outputs a set of rules, which is then used to train inference parameters, in an EM-based approach. Preliminary experiments on RNNLogic suggest that its ruleset has limited coverage, due to which symbolic inferences do not fire for many queries, and the model gets limited to using its embedding part only. The goal of this work is to strengthen the symbolic inferences in NS-KGC
models for better overall performance.
In this work, we propose simple augmentations that take an existing ruleset (such as one output by RNNLogic) and proposes additional (related)
rules to improve coverage and quality. We propose three augmentations. First, we convert each deductive rule into its abductive counterparts. Second, we supplement each rule via an equivalent rule that uses inverses for all constituent relations.
Third, we generate additional high-quality rules independently by local random walks and subsequent PCA filtering (Galárraga et al., 2013). These increase size of ruleset drastically; we balance runtimes by additionally pruning rules from existing set using our filtering approach. Overall, this results in a comparable number of high-quality and high-coverage rules, for use in NS-KGC.
On four KGC datasets, over three NS-KGC models, we find that our augmentations consistently improve KGC performance, outperforming no-augmentation baselines by up to 7.1 MRR and 8.5 Hits@1 pts. We believe our augmentations should become standard practice over any ruleset for NS-KGC. We release our code1 and rulesets.
1https://github.com/dair-iitd/NS-KGC-AUG
## 2 Background And Related Work
We are given an incomplete KG K = (E, R, T )
consisting of entities E, relation set R and set T =
{(h, r, t)} of triples. Our goal is to predict the validity of any triple not present in T .
Related Work: Existing work on NS-KGC can roughly be characterized into four types. One approach is to use attention over relations to learn end-to-end differentiable models (Yang et al., 2017; Sadeghian et al., 2019). A second approach, which includes Minerva (Das et al., 2018) and DeepPath (Xiong et al., 2017), uses RL to train an agent to find reasoning paths for KG completion. These approaches are not yet competitive to KG embedding models for large datasets. Thirdly, models like ExpressGNN (Zhang et al., 2020) and RNNLogic use variational inference to assess plausibility of a given triple. We experiment with both these models in this paper. The final type includes UNIKER (Cheng et al., 2021) and RUGE (Guo et al., 2018), which integrate embeddings alongside traditional rules learnt via ILP models. We believe that our augmented rules can benefit these works too. Since our experiments are based on RNNLogic, ExpressGNN and we utilize PCA scores for filtering, we describe these in some detail next.
RNNLogic+: As a pre-processing step, for every r ∈ R, RNNLogic adds a relation r−1 to R, and corresponding facts using inverse relations to T .
RNNLogic first produces a set of first order rules
(L) using an LSTM which are used by the RNNLogic+ predictor to compute the score of a given triple. Given a query (h, r, ?), the candidate answer o is scored by RNNLogic+ as:
$$\mathbf{scor}(o) = \mathrm{MLP}\left(\mathrm{PNA}\left(\{\,v_{l}\mid\#(h,l,o)\,\}_{l\in L}\right)\right)\tag{1}$$

where the learnable embedding vl of a given rule l ∈ L is weighted by the number of groundings (#) that the triple (h, r, o) satisfies in the rule l's body.
The resulting weighted embeddings of all rules are aggregated by employing PNA aggregator (Corso et al., 2020) and this aggregated embedding is passed through an MLP to obtain a final score.
The authors designed another scoring function that incorporates RotatE (Sun et al., 2019) into the scoring function, **scor**(o), in equation (1) where the goal is to exploit the knowledge encoded in the KG embeddings. The resulting scoring function is:
$$\mathrm{score}_{KGE}(o) = \mathbf{scor}(o) + \eta\,\mathbf{RotatE}(h,r,o)\tag{2}$$

where **RotatE**(h, r, o) is the score of the triple obtained from RotatE, and η is a hyper-parameter.
RotatE (h, r, o) is the negation of the value obtained by rotating the embedding for h by the rotation transformation defined by the embedding of r in complex space and computing the distance from the embedding of t. Please refer to Appendix B
for further details.
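As a rough sketch of how such a scorer could be wired up (our own simplification, not the authors' code: a mean aggregator stands in for PNA, `mlp` is any callable scorer, and `rotate_score` is a hypothetical precomputed RotatE score):

```python
import numpy as np

def score_candidate(grounding_counts, rule_embs, mlp, rotate_score=None, eta=0.1):
    """Score a candidate answer o for a query (h, r, ?).

    grounding_counts[i] -- #groundings of rule i connecting h to o
    rule_embs[i]        -- learnable embedding of rule i
    """
    weighted = grounding_counts[:, None] * rule_embs   # weight rule embeddings
    aggregated = weighted.mean(axis=0)                 # stand-in for PNA
    score = mlp(aggregated)                            # simplified eq. (1)
    if rotate_score is not None:                       # eq. (2)
        score += eta * rotate_score
    return score

# Toy usage: 3 rules with 8-dimensional embeddings and a linear "MLP".
counts = np.array([2.0, 0.0, 5.0])
embs = np.ones((3, 8))
print(score_candidate(counts, embs, mlp=lambda v: float(v.sum())))
```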
ExpressGNN: It is a novel model that integrates Markov Logic Networks (MLN) (Richardson and Domingos, 2006) and Graph Neural Networks (GNN) (Kipf and Welling, 2017) to exploit their complementary strengths. An open-world paradigm is adopted in which a fact that is unknown in KG is assumed to be hidden (not false). The joint distribution of the observed and hidden triples of the KG in the MLN is optimized by employing a variational EM framework where the variational posterior distribution of the hidden variables is encoded as a GNN. Please refer to (Zhang et al., 2020)
for further details about the model.
PCA Score: It is a symbolic rule confidence metric proposed in AMIE (2013) - see Appendix M for details. Broadly, it is the number of positive examples satisfied by a rule, divided by the total number of tails reached by the rule from heads occurring in the training dataset. Its performance in the context of AMIE was not as good due to its purely symbolic approach, and we are likely the first to show its utility in the context of NS-KGC.
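A minimal sketch of this PCA confidence computation for a single rule, under the usual partial-completeness assumption (the data structures here are our own):

```python
def pca_confidence(body_pairs, kg_facts, head_rel):
    """body_pairs: (h, t) pairs reachable by grounding the rule body.
       kg_facts:   set of known (h, r, t) triples.
       Returns support / #body pairs whose head has *some* known tail."""
    support = sum((h, head_rel, t) in kg_facts for h, t in body_pairs)
    heads_with_known_tail = {h for h, r, t in kg_facts if r == head_rel}
    denom = sum(h in heads_with_known_tail for h, t in body_pairs)
    return support / denom if denom else 0.0
```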
## 3 **Rule Augmentation in NS-KGC Models**
With the aim of maximal utilization of a given rule l ∈ L, we first propose two rule augmentation techniques: abduction and rule inversion. The other two techniques prune low-quality rules from L, and independently add new rules to increase coverage.
All augmentations are generic and can be integrated with any existing ruleset and NS-KGC model.
Abduction: The goal of abductive reasoning (or abduction) is to find the best explanation from a given set of observations (Pierce, 1935). It has seen limited use in the context of KBs (Yoshikawa et al., 2019). In our approach, for every rule in L,
we introduce several abductive rules, each with one of the antecedents appearing as the consequent. As an example, consider the rule:
R1(X, Y) ∧ R2(Y, Z) ∧ R3(Z, W) ⇒ RH(X, W)
Our augmentation will generate abductive rules, one for each relation in the body, as:
R2(Y, Z) ∧ R3(Z, W) ∧ RH−1(W, X) ⇒ R1−1(Y, X)
R3(Z, W) ∧ RH−1(W, X) ∧ R1(X, Y) ⇒ R2−1(Z, Y)
RH−1(W, X) ∧ R1(X, Y) ∧ R2(Y, Z) ⇒ R3−1(W, Z)
As an example, let's say a learned rule is BornIn(X, U) ∧ PlaceInCountry(U, Y) ⇒ **Nationality**(X, Y). If in the KG, we know that Oprah has nationality U.S., and that she is born in Mississippi, then abduction allows the model to hypothesize that Mississippi might be in the U.S.
Of course, not all abductions are accurate: for instance, just because Alabama is known to be in the U.S. does not mean that Oprah was born in Alabama. Abductive rules increase rule coverage at the cost of precision. We expect the predictor scorer to automatically handle which (abductive)
rules can and cannot be trusted.
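A minimal sketch of this construction is given below; it is not the authors' implementation. Rules are represented as (head, body) tuples over relation names, and the `^-1` suffix is an assumed naming convention for inverse relations.

```python
def inverse(rel):
    # hypothetical naming convention for inverse relations r and r^-1
    return rel[:-3] if rel.endswith("^-1") else rel + "^-1"

def abduce(rule):
    """rule = (head, [R1, ..., Rn]) read as R1 ∧ ... ∧ Rn ⇒ head.
    Returns one abductive rule per body atom, with that atom's inverse as the new head."""
    head, body = rule
    abductive = []
    for i, atom in enumerate(body):
        # body atoms after position i, then the inverted head, then atoms before i
        new_body = body[i + 1:] + [inverse(head)] + body[:i]
        abductive.append((inverse(atom), new_body))
    return abductive

# abduce(("RH", ["R1", "R2", "R3"])) yields
# ("R1^-1", ["R2", "R3", "RH^-1"]), ("R2^-1", ["R3", "RH^-1", "R1"]), ("R3^-1", ["RH^-1", "R1", "R2"])
```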
Rule Inversion: Our second rule augmentation takes an existing rule and rewrites it by referring to inverses of all relations.
As an example, if a rule uses the path Oprah −BornIn→ Mississippi −PlaceInCountry→ US, then it could also use the equivalent path US −PlaceInCountry−1→ Mississippi −BornIn−1→ Oprah. Formally, for every original rule:
R1(X, Y) ∧ R2(Y, Z) ∧ R3(Z, W) ⇒ RH(X, W)
we add to the ruleset the following inverted rule:
R3−1(W, Z) ∧ R2−1(Z, Y) ∧ R1−1(Y, X) ⇒ RH−1(W, X)
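Under the same hypothetical rule representation as the abduction sketch above (and reusing its `inverse` helper), rule inversion reverses the body and inverts every relation:

```python
def invert(rule):
    # R1 ∧ ... ∧ Rn ⇒ RH  becomes  Rn^-1 ∧ ... ∧ R1^-1 ⇒ RH^-1
    head, body = rule
    return (inverse(head), [inverse(r) for r in reversed(body)])

# invert(("RH", ["R1", "R2", "R3"])) == ("RH^-1", ["R3^-1", "R2^-1", "R1^-1"])
```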
Rule Filtering: Augmentations increase the size of the ruleset. In order to reduce the number of parameters and the training/test times of the NS-KGC model, we prune seemingly low-quality rules from the augmented rulebase. For this, we compute the PCA score for each original and augmented rule and prune all the rules that have a score less than a threshold (set at 0.01 in experiments) and fewer than 10 groundings. So, all low-coverage rules with seemingly low quality are pruned out.
As experiments show, this results in up to 70%
reduction in the number of rules, while preserving KGC performance.
Random Walk Augmentation: Motivated by the empirical success of PCA scores for finding good rules in the previous step, we further augment our ruleset with new, high-scoring rules generated independently via local random walks. Starting at each entity in the KG, we perform a number of random walks of fixed length. Each such random walk constitutes the body of a rule, and the relation connecting the end entities in the KG forms the head of the discovered rule. We score these rules by the PCA score and retain all such rules that have a PCA score above the threshold (of 0.1).
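The sketch below illustrates this mining procedure on a KG given as a list of (h, r, t) triples; the function and parameter names are illustrative, and the PCA-score filtering (threshold 0.1) described above is assumed to be applied to the returned candidates afterwards.

```python
import random
from collections import defaultdict

def mine_rules_by_random_walks(triples, walk_len=3, walks_per_entity=10):
    """Sample fixed-length walks; the relation sequence along a walk is a candidate
    rule body, and any relation directly linking the walk's endpoints is its head."""
    out_edges = defaultdict(list)     # entity -> [(relation, next_entity)]
    linking = defaultdict(set)        # (start_entity, end_entity) -> relations between them
    for h, r, t in triples:
        out_edges[h].append((r, t))
        linking[(h, t)].add(r)
    candidates = set()
    for start in list(out_edges):
        for _ in range(walks_per_entity):
            node, body = start, []
            for _ in range(walk_len):
                if not out_edges[node]:
                    break                          # dead end: stop the walk early
                r, node = random.choice(out_edges[node])
                body.append(r)
            for head_rel in linking.get((start, node), ()):
                candidates.add((head_rel, tuple(body)))
    return candidates                              # to be filtered by PCA score
```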
## 4 Experiments
Datasets: We use four datasets for evaluation: WN18RR (Dettmers et al., 2018), FB15K-237 (Toutanova and Chen, 2015), Kinship and UMLS (Kok and Domingos, 2007). For each triple in the test set, we answer queries (h, r, ?) and
(t, r−1, ?) with answers t and h. We report the Mean Reciprocal Rank (MRR) and Hit@k (H@1, H@10) under the filtered measures (Bordes et al., 2013). Details and data stats are in Appendix A.
Baselines: We first experiment with two base models: RNNLogic+ ([RNN] in tables), and **RNNLogic**+
with RotatE ([RNN+**RotE**]) (Eqn 2). We have reproduced the numbers published by the original authors for these models (details in Appendix D). We run these models with two rulesets: (1) **Orig**, rules generated by RNNLogic (around 300 rules per relation for WN18RR and FB15k-237, and 1000 rules per relation for Kinship and UMLS), and (2) RW,
only the rules discovered by our random walks.
This second setting can only evaluate the value of abduction, inversion, and pruning since random walks are anyway used in generating rules. More details in Appendix C, F and G.
In order to assess the generality of our augmentations, we also experiment with ExpressGNN
(Zhang et al., 2020). We choose the top five rules for each relation from RNNLogic's **Orig** ruleset according to PCA confidence and provide them as the input ruleset to ExpressGNN ([**ExpGNN**] in tables).
ExpressGNN does not scale up to the augmented ruleset for FB15K-237, hence we test it for the other three datasets. Refer to Appendix E for more details. We use AUG to denote the performance of rule augmentations for all baselines.
We also tried rulesets from NeuralLP (2017), but they are too small to be useful with RNNLogic+. The only other NS-KGC model that has reported performance similar to RNNLogic+ is RLogic (2022). Unfortunately, their code is not publicly available.2

2Our reimplementation could not match reported results, and sending several emails to the original authors was not helpful.

Results: We report the results in Table 1 for the RNNLogic baselines (further details in Appendix H).
Table 1: Results of reasoning on four datasets with RNNLogic+ (RNN). **Orig** represents RNNLogic rules. **RotE** represents RotatE. AUG represents our proposed augmentations. RW denotes rules discovered by random walks.

| Algorithm | WN18RR | | | FB15K-237 | | | Kinship | | | UMLS | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | MRR | H@1 | H@10 | MRR | H@1 | H@10 | MRR | H@1 | H@10 | MRR | H@1 | H@10 |
| [RNN]-(RW) | 44.2 | 41.6 | 48.7 | 26.4 | 19.8 | 39.9 | 63.2 | 47.8 | 93.7 | 74.7 | 63.1 | 93.0 |
| [RNN]-(RW+AUG) | 47.7 | 44.3 | 54.3 | 29.5 | 21.5 | 45.3 | 65.7 | 50.9 | 94.8 | 79.7 | 69.5 | 95.7 |
| [RNN+RotE]-(RW) | 48.7 | 45.1 | 55.9 | 30.8 | 22.8 | 46.9 | 71.4 | 58.0 | 95.7 | 82.0 | 73.5 | 95.3 |
| [RNN+RotE]-(RW+AUG) | 51.1 | 47.4 | 58.5 | 31.4 | 23.3 | 47.9 | 71.9 | 58.9 | 96.2 | 83.8 | 75.8 | 96.4 |
| [RNN]-(Orig) | 49.6 | 45.5 | 57.4 | 32.9 | 24.0 | 50.6 | 61.6 | 46.3 | 91.8 | 81.4 | 71.2 | 95.7 |
| [RNN]-(Orig+AUG) | 52.7 | 48.3 | 61.3 | 34.5 | 25.7 | 51.9 | 68.7 | 54.8 | 95.7 | 84.0 | 75.2 | 96.4 |
| [RNN+RotE]-(Orig) | 51.6 | 47.4 | 60.2 | 34.3 | 25.6 | 52.4 | 68.9 | 54.9 | 94.6 | 81.5 | 71.2 | 96.0 |
| [RNN+RotE]-(Orig+AUG) | 55.0 | 51.0 | 63.5 | 35.3 | 26.5 | 52.9 | 72.9 | 59.9 | 96.4 | 84.2 | 76.1 | 96.5 |
We observe that in all settings, there is a notable increase in performance using augmented rules. In particular, we obtain 7.1 pt and 8.5 pt increase in MRR and Hits@1 in [RNN]-(**Orig**) setting on Kinship, and 3.5 pt and 5.6 pt increase in MRR and Hits@10 in [RNN]-(RW) setting for WN18RR dataset. We also find that rule augmentations complement RotatE scores in capturing more information about the KG, leading to improved performance in those settings too. To the best of our knowledge, our best results for WN18RR are state-of-the-art for NS-KGC models.
Next, we present the results of our proposed augmentations with ExpressGNN3 as the baseline in Table 2. We note that ExpressGNN assumes the knowledge of test queries while it constructs the MLN
during training. Therefore, the results presented in Table 2 are not directly comparable with the results of other models presented in the paper, which do not make this assumption. We observe substantial gains on all datasets and all metrics, notably a 22.4 pt MRR, 17.9 pt Hits@1 and 29.9 pt Hits@10 improvement on WN18RR dataset with our augmentations (AUG). This experiment demonstrates that AUG can help other neuro-symbolic settings as well. Refer to Appendix E for more details.
Table 2: Results of reasoning on three datasets with ExpressGNN (ExpGNN). AUG represents our proposed augmentations.

| Dataset | Model | MRR | H@1 | H@10 |
|---|---|---|---|---|
| WN18RR | [ExpGNN] | 52.3 | 44.1 | 63.6 |
| WN18RR | [ExpGNN+AUG] | 74.7 | 62.0 | 93.5 |
| UMLS | [ExpGNN] | 58.1 | 44.4 | 77.6 |
| UMLS | [ExpGNN+AUG] | 60.9 | 49.2 | 83.4 |
| Kinship | [ExpGNN] | 52.7 | 41.7 | 79.8 |
| Kinship | [ExpGNN+AUG] | 64.1 | 49.5 | 93.2 |
## 5 Analysis Of Augmented Rules
We perform five further analyses to answer the following questions. Q1. Are the rules created by abduction and rule inversion of high quality?
Q2. What is the individual effect of each type of augmentation on the performance? Q3. How do the rule augmentations affect the training time of a model? Q4. Can we get the same performance as augmentation by generating more rules from the LSTM in RNNLogic? Q5. Are the augmented rules interpretable by a human?
Quality of New Rules: To answer Q1, we employ two metrics (PCA-metric and FOIL-metric) to assess the quality of rules before and after abduction and rule inversion. The rules obtained from random walks have high scores by construction since they are filtered based on PCA score. Therefore, they are of high quality as per our definition. (Details in Appendix M and N)
Table 3: Number of high quality rules before and after augmentations on rules generated by RNNLogic.
| Rule Set | WN18RR | | UMLS | |
|---|---|---|---|---|
| | FOIL | PCA | FOIL | PCA |
| Original | 2286 | 2647 | 25079 | 28982 |
| Original w/ INV | 3157 | 3577 | 42188 | 46908 |
| Original w/ ABD | 7141 | 7607 | 68693 | 84554 |
| Original w/ INV + ABD | 8502 | 9155 | 100146 | 125019 |
Table 3 presents the number of rules that have a score of at least 0.1 according to each metric, which we regard as the criterion for a high-quality rule. We observe that there is a large increase in the number of high-quality rules after abduction and rule inversion, nearly tripling in the case of abduction (row 1 vs row 3). This is because the augmented rules exploit the same groundings as the original rules, in the form of new rules. Thus, augmented counterparts of high-quality rules are likely to be high-quality. Overall, we find that abduction and rule inversion do indeed produce high-quality rules.
Ablation: To answer Q2, we perform an ablation study for inversion (INV), abduction (ABD), random walk augmentation (RW) and rule filtering (FIL) on the [RNN+RotE]-(**Orig**) setting for the WN18RR and Kinship datasets to observe the impact of each type of augmentation. The results are presented in Table 4 (further details are in Appendix I).

Table 4: Ablation study on WN18RR and Kinship for filtering (FIL), inversion (INV), abduction (ABD) and PCA-filtered random walk augmentation (RW). Full numbers are given in Appendix I (Tables 10 and 11).

![4_image_1.png](4_image_1.png)

Table 5: Performance/time trade-off per epoch on two datasets. T/T (min) represents training time per epoch in minutes.

| Dataset | Modification | #Rules | T/T | MRR | H@1 | H@10 |
|---|---|---|---|---|---|---|
| WN18RR | Orig | 6135 | 334 | 51.6 | 47.4 | 60.2 |
| WN18RR | Orig + AUG | 25729 | 1520 | 55.0 | 50.6 | 63.3 |
| WN18RR | Orig + AUG + FIL | 20053 | 931 | 55.0 | 51.0 | 63.5 |
| Kinship | Orig | 49994 | 5 | 68.9 | 54.9 | 94.6 |
| Kinship | Orig + AUG | 315865 | 36 | 72.5 | 59.5 | 96.4 |
| Kinship | Orig + AUG + FIL | 97331 | 11 | 72.9 | 59.9 | 96.4 |
In general, abduction (row 3) gives larger improvements than rule inversion (row 2) because as we noticed in the previous section, abduction adds a larger number of high-quality rules to the rule set.
We also find that adding the PCA-based random walk rules results in performance improvement, even with only 5% new rules being added (as in Kinship) as compared to original rule set. Finally, we find that filtering based on the PCA metric results in marginal performance improvement, along with lower running times (see below).
Performance vs Training Time Trade-off: To answer Q3, we report training time per epoch (in minutes), size of ruleset and performance metrics after augmentation through ABD, INV and RW (denoted as AUG) and filtering (AUG + FIL) with [RNN + **RotE**]
as the baseline model in Table 5.
Our proposed augmentations (INV, ABD and RW)
result in substantial performance gains, at the cost of 5-6 times increase in the training time. After filtering (FIL), there is no decrease in performance, and the training time goes down by 2-3× compared to AUG. Therefore, we obtain substantial performance gains through our augmentations, at the cost of only 2-3 times increase in training time.
Rule Generation vs Rule Augmentation: Our augmentations result in a 100-200% increase in the number of rules across datasets after filtering. As a control experiment to answer Q4, we train RNNLogic to generate 80 rules per relation (R/R) and augment the resulting rules without filtering (for a fair comparison). We further train RNNLogic with 500 rules per relation without augmentation and compare the performance of both rulesets (which now have comparable size) using [RNN+**RotE**] on WN18RR and Kinship in Table 6 (see Appendix J).

Table 6: Rule generation vs. rule augmentation. R/R and TR denote the number of rules per relation and the total rules generated from RNNLogic, respectively.

| Dataset | R/R | TR | AUG | MRR | H@1 | H@10 |
|---|---|---|---|---|---|---|
| WN18RR | 80 | 9867 | Yes | 49.0 | 44.9 | 56.7 |
| WN18RR | 500 | 11000 | No | 47.7 | 43.7 | 55.2 |
| Kinship | 80 | 18432 | Yes | 69.5 | 56.1 | 94.6 |
| Kinship | 500 | 25000 | No | 66.1 | 52.1 | 93.1 |
We observe that rule augmentations lead to large improvement over rule generation in all cases, even when rule generation creates more rules. Thus, we find that rule augmentation is more beneficial than simply using more rules from the rule generator.
Augmentations exploit a small number of highquality rules to their full potential.
Qualitative Analysis: To answer Q5, we randomly sample 50 rules from the **Orig** and RW rules for the FB15K-237 dataset and score them as 0 (gibberish), 1 (logically dubious but statistically plausible)
and 2 (logically correct) for each ruleset. The reported numbers are averages of scores obtained from two human annotators. We do not include INV and ABD in this comparison as they are generated from **Orig** rules utilizing the same groundings and thus we expect them to be as interpretable. The scores are 0.90 (**Orig**) and 1.23 (RW). RW rules are more interpretable due to their high PCA scores.
One example of an interpretable rule added by RW is Friends(A, C), Inverse_**Producer**(C, D), Writer(D, B) :- **Friends**(A, B). We provide additional rule examples for each type of augmentation in Appendix K.
## 6 Conclusion And Future Work
We present simple rule augmentation techniques in the context of Neuro-Symbolic Knowledge Graph models and obtain substantial increase in performance over strong base models. We believe our augmentations can become standard for all subsequent NS-KGC models. We release code and rulesets for further research. Future work includes using our augmentation technique during the iterative learning of rules in algorithms such as RNNLogic, potentially further improving their performance.
## Acknowledgements
This work is supported by grants by Google, IBM,
Verisk, and 1MG, and the Jai Gupta chair fellowship by IIT Delhi. We also acknowledge travel support from Google travel grant. We thank the IIT
Delhi HPC facility for its computational resources.
## Limitations
Since rule abduction and inversion utilize the same groundings as the original rules, Neuro-Symbolic KGC models that are based on grounding the entire rule will not benefit from these augmentations.
Abduction and inversion also require the model to be trained on a knowledge graph that contains the inverse relations r−1for each relation r. Finally, since RNNLogic+ has a separate rule embedding for each rule, performing rule augmentation increases the number of parameters in the model and leads to longer training times and larger GPU
memory consumption.
## Ethics Statement
We anticipate no substantial ethical issues arising due to our work on rule augmentation for NeuroSymbolic KGC. Our work relies on a set of rules generated from another source to perform augmentation. This may result in the augmented rule set exaggerating the effect of malicious or biased rules in the original rule set.
## Acknowledgements
This work is supported by IBM AI Horizons Network grant, grants by Google, Verisk, and 1MG, an IBM SUR award, and the Jai Gupta chair fellowship by IIT Delhi. We acknowledge travel support by Google and Yardi School of AI travel grants.
We thank the IIT Delhi HPC facility for its computational resources.
## References
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating Embeddings for Modeling Multirelational Data. In *NeurIPS*. Curran Associates, Inc.
Kewei Cheng, Jiahao Liu, Wei Wang, and Yizhou Sun.
2022. RLogic: Recursive Logical Rule Learning from Knowledge Graphs. In *Proceedings of the 28th* ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '22, page 179–189, New York, NY, USA. Association for Computing Machinery.
Kewei Cheng, Ziqing Yang, Ming Zhang, and Yizhou Sun. 2021. UniKER: A Unified Framework for Combining Embedding and Definite Horn Rule Reasoning for Knowledge Graph Inference. In *EMNLP*, pages 9753–9771, Online and Punta Cana, Dominican Republic. ACL.
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. 2020. Principal Neighborhood Aggregator for Graph Nets. In *NeurIPS*, pages 13260–13271.
Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. 2018. Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning. In *ICLR*.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D
Knowledge Graph Embeddings. In *Proceedings of* the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press.
Luis Antonio Galárraga, Christina Teflioudi, Katja Hose, and Fabian Suchanek. 2013. AMIE: Association Rule Mining under Incomplete Evidence in Ontological Knowledge Bases. In *Proceedings of the 22nd* International Conference on World Wide Web, WWW
'13, page 413–422, New York, NY, USA. Association for Computing Machinery.
Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2018. Knowledge Graph Embedding with Iterative Guidance from Soft Rules. In *AAAI*, pages 4816–4823.
Thomas N. Kipf and Max Welling. 2017. SemiSupervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17.
Stanley Kok and Pedro Domingos. 2007. Statistical Predicate Invention. In *Proceedings of the 24th International Conference on Machine Learning*, ICML
'07, page 433–440, New York, NY, USA. Association for Computing Machinery.
C. S. Pierce. 1935. *The Collected Papers of Charles* Sanders Peirce. Harvard University Press, Harvard, US.
Meng Qu, Junkun Chen, Louis-Pascal A. C. Xhonneux, Yoshua Bengio, and Jian Tang. 2021. RNNLogic:
Learning Logic Rules for Reasoning on Knowledge Graphs. In *ICLR*, pages 1–21.
J. R. Quinlan. 1990. Learning Logical Definitions from Relations. *Machine Learning*, 5(3):239–266.
Matthew Richardson and Pedro Domingos. 2006.
Markov logic networks. *Machine Learning*,
62(1–2):107–136.
Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. 2019. DRUM: End-To-End Differentiable Rule Mining On Knowledge Graphs. In *NeurIPS*, volume 32. Curran Associates, Inc.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In *ICLR*.
Kristina Toutanova and Danqi Chen. 2015. Observed versus Latent Features for Knowledge Base and Text Inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics.
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo.
2017. Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–
2743.
Wenhan Xiong, Thien Hoang, and William Yang Wang.
2017. DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning. In *EMNLP*, pages 564–573, Copenhagen, Denmark. ACL.
Fan Yang, Zhilin Yang, and William W Cohen. 2017.
Differentiable Learning of Logical Rules for Knowledge Base Reasoning. In *NeurIPS*, volume 30. Curran Associates, Inc.
Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, and Daisuke Bekki. 2019. Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference. In The Thirty-Third AAAI
Conference on Artificial Intelligence, AAAI 2019, pages 7410–7417. AAAI Press.
Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, and Le Song. 2020. Efficient Probabilistic Logic Reasoning with Graph Neural Networks. In *ICLR*.
## A Data Statistics And Evaluation Metrics

Table 7 summarizes the statistics of the data used in the experiments of our work. We utilize the standard train, validation and test splits for the WN18RR and FB15k-237 datasets. Since there are no standard splits for the UMLS and Kinship datasets, for consistency, we employ the splits used by RNNLogic (2021) for evaluation (created by randomly sampling 30% triplets for training, 20% for validation and the rest 50% for testing).

Metrics: For each triplet (h, r, t) in the test set, traditionally queries of the form (h, r, ?) and (?, r, t) are created for evaluation, with answers t and h respectively. We model the (?, r, t) query as (t, r−1, ?) with the same answer h, where r−1 is the inverse relation for r. In order to train the model over the inverse relations, we augment the training data with an additional (t, r−1, h) triple for every triple (h, r, t) present in the KG.

Given ranks for all queries, we report the Mean Reciprocal Rank (MRR) and Hit@k (H@k, k = 1, 10) under the filtered setting in the main paper and two additional metrics, Mean Rank (MR) and Hits@3, in the appendices. MRR and Hits@k metrics are reported after multiplying with 100. To maintain consistency with RNNLogic, in cases where the model assigns the same probability to other entities along with the answer, we compute the rank as (m + (n+1)/2), where m is the number of entities with higher probabilities than the correct answer and n is the number of entities with the same probability as the answer.

## B RotatE

RotatE is a knowledge graph embedding model that embeds entities and relations in complex space. Relation embeddings are modeled as rotations in complex vector space. Formally, **RotatE**(h, r, t) is calculated using the following equation:

$$\mathrm{RotatE}(h, r, t) = -d(\mathbf{x}_h \circ \mathbf{x}_r, \mathbf{x}_t) \qquad(3)$$

where d is the cosine distance in complex vector space, the RotatE embedding of r is xr, and ◦ is the Hadamard product. Intuitively, we rotate xh by the rotation defined by xr and consider the distance between the result and xt. For our experiments, **RotatE** is trained separately and the trained embeddings are used to calculate scores for the [RNN + **RotE**] baseline.

## C Experimental Setup For RNNLogic

In order to obtain the main results in Table 1, we train the rule generator in RNNLogic with optimal hyperparameters obtained after communication with the original authors and generate a set of high-quality Horn rules to use for training RNNLogic+. For our best results, we utilize the optimal rules provided by the authors of RNNLogic (https://github.com/DeepGraphLearning/RNNLogic). We augment these rules by abduction (ABD), and then rule inversion (INV) on both the original rules and the rules formed after abduction. We further augment the rulebase with the rules discovered by random walks (RW). Finally, we filter (FIL) superior rules from these rules by PCA score.
| Datasets | #Entities | #Relations | #Training | #Validation | #Test |
|------------|-------------|--------------|-------------|---------------|---------|
| FB15K-237 | 14541 | 237 | 272,115 | 17,535 | 20,446 |
| WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134 |
| Kinship | 104 | 25 | 3,206 | 2,137 | 5,343 |
| UMLS | 135 | 46 | 1,959 | 1,306 | 3,264 |
Table 8: RNNLogic rules used per dataset. INV and ABD, RW represent rule inversion and abduction and PCA-based walk rule augmentation respectively. The last column represents the rule filtering (FIL) applied on all the rules.
| Datasets | #Rules | #Rules + INV | #Rules + ABD | #Rules + INV + ABD | #Rules + INV + ABD + RW | #Rules + INV + ABD + RW + FIL |
|---|---|---|---|---|---|---|
| FB15K-237 | 126137 | 174658 | 295403 | 392280 | 394967 | 298446 |
| WN18RR | 6135 | 8742 | 18251 | 23304 | 25729 | 20053 |
| Kinship | 49994 | 91544 | 171302 | 301646 | 315865 | 97331 |
| UMLS | 91908 | 171526 | 322464 | 564374 | 574687 | 204504 |
Table 9: Results of reasoning on four datasets: WN18RR, FB15K-237, Kinship and UMLS with RNNLogic+ (RNN). **Orig** represents rules acquired from RNNLogic. **RotE** represents RotatE. AUG represents all the proposed approaches in our work. RW represents rules obtained only from PCA-filtered random walk augmentation.
| Algorithm | WN18RR | | | | | FB15K-237 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | MR | MRR | H@1 | H@3 | H@10 | MR | MRR | H@1 | H@3 | H@10 |
| [RNN]-(RW) | 8218.73 | 44.2 | 41.6 | 45.5 | 48.7 | 808.32 | 26.4 | 19.8 | 28.9 | 39.9 |
| [RNN]-(RW+AUG) | 7241.14 | 47.7 | 44.3 | 49.2 | 54.3 | 481.58 | 29.5 | 21.5 | 32.3 | 45.3 |
| [RNN+RotE]-(RW) | 4679.70 | 48.7 | 45.1 | 49.8 | 55.9 | 521.06 | 30.8 | 22.8 | 33.5 | 46.9 |
| [RNN+RotE]-(RW+AUG) | 4431.75 | 51.1 | 47.4 | 52.6 | 58.5 | 279.65 | 31.4 | 23.3 | 34.3 | 47.9 |
| [RNN]-(Orig) | 5857.65 | 49.6 | 45.5 | 51.4 | 57.4 | 256.14 | 32.9 | 24.0 | 36.1 | 50.6 |
| [RNN]-(Orig+AUG) | 5156.38 | 52.7 | 48.3 | 54.9 | 61.3 | 218.11 | 34.5 | 25.7 | 37.9 | 51.9 |
| [RNN+RotE]-(Orig) | 4445.79 | 51.6 | 47.4 | 53.4 | 60.2 | 217.30 | 34.3 | 25.6 | 37.5 | 52.4 |
| [RNN+RotE]-(Orig+AUG) | 4231.77 | 55.0 | 51.0 | 57.2 | 63.5 | 198.81 | 35.3 | 26.5 | 38.7 | 52.9 |

| Algorithm | Kinship | | | | | UMLS | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | MR | MRR | H@1 | H@3 | H@10 | MR | MRR | H@1 | H@3 | H@10 |
| [RNN]-(RW) | 3.6 | 63.2 | 47.8 | 73.5 | 93.7 | 5.17 | 74.7 | 63.1 | 83.6 | 93.0 |
| [RNN]-(RW+AUG) | 3.36 | 65.7 | 50.9 | 75.8 | 94.8 | 3.65 | 79.7 | 69.5 | 87.8 | 95.7 |
| [RNN+RotE]-(RW) | 2.99 | 71.4 | 58.0 | 81.6 | 95.7 | 3.46 | 82.0 | 73.5 | 88.9 | 95.3 |
| [RNN+RotE]-(RW+AUG) | 2.89 | 71.9 | 58.9 | 81.7 | 96.2 | 3.20 | 83.8 | 75.8 | 90.0 | 96.4 |
| [RNN]-(Orig) | 4.45 | 61.6 | 46.3 | 71.7 | 91.8 | 3.66 | 81.4 | 71.2 | 90.3 | 95.7 |
| [RNN]-(Orig+AUG) | 3.15 | 68.7 | 54.8 | 78.9 | 95.7 | 2.81 | 84.0 | 75.2 | 91.5 | 96.4 |
| [RNN+RotE]-(Orig) | 3.28 | 68.9 | 54.9 | 78.8 | 94.6 | 3.17 | 81.5 | 71.2 | 90.1 | 96.0 |
| [RNN+RotE]-(Orig+AUG) | 2.80 | 72.9 | 59.9 | 82.6 | 96.4 | 2.83 | 84.2 | 76.1 | 91.3 | 96.5 |
Table 10: Ablation study performed on Kinship and UMLS for filtering (FIL), inversion (INV), abduction (ABD) and random walk augmentation (RW). AUG represents all proposed approaches in our work taken together.

| Algorithm | Kinship | | | | | UMLS | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | MR | MRR | H@1 | H@3 | H@10 | MR | MRR | H@1 | H@3 | H@10 |
| AUG | 2.80 | 72.9 | 59.9 | 82.6 | 96.4 | 2.83 | 84.2 | 76.1 | 91.3 | 96.5 |
| AUG minus ABD | 2.90 | 71.3 | 57.8 | 81.4 | 96.2 | 3.16 | 82.6 | 72.9 | 90.8 | 96.5 |
| AUG minus INV | 2.89 | 71.3 | 57.7 | 81.5 | 96.4 | 2.98 | 83.8 | 74.8 | 91.9 | 96.5 |
| AUG minus FIL | 2.84 | 72.5 | 59.5 | 82.3 | 96.4 | 3.01 | 83.9 | 75.1 | 91.5 | 96.5 |
| AUG minus RW | 2.99 | 70.7 | 57.1 | 80.8 | 95.6 | 3.05 | 82.8 | 73.2 | 91.1 | 96.5 |
Table 11: Ablation study performed on WN18RR for abduction (ABD), inversion (INV), filtering (FIL) and PCA-based random walk augmentation (RW). AUG represents all the approaches proposed in our work.

| Algorithm | MR | MRR | H@1 | H@3 | H@10 |
|---|---|---|---|---|---|
| AUG | 4231.77 | 55.0 | 51.0 | 57.2 | 63.5 |
| AUG minus ABD | 4406.95 | 52.2 | 47.8 | 54.1 | 61.0 |
| AUG minus INV | 4302.04 | 54.4 | 50.0 | 56.8 | 62.7 |
| AUG minus FIL | 4224.20 | 55.0 | 50.6 | 57.1 | 63.3 |
| AUG minus RW | 4263.43 | 54.6 | 50.1 | 57.0 | 63.2 |
We present statistics detailing the number of rules used per dataset after each augmentation step in Table 8. These rules are utilized in the RNNLogic+ ([RNN]-(**Orig**)) and RNNLogic+ with RotatE ([RNN+RotE]-(**Orig**)) baselines. For the other results, [RNN]-(RW) and [RNN+**RotE**]-(RW), we employ only the rules obtained by RW augmentation and train the RNNLogic+ model with them (Appendix F). The goal of this set of results is to test the utility of abduction and rule inversion with a different set of rules. The details of training the RNNLogic+ model are provided in Appendix G.
## D RNNLogic Results Reproduction

We have reproduced the results of RNNLogic+ with and without RotatE and obtained similar results to the original RNNLogic paper (Qu et al., 2021); however, the numbers reported in this paper for [RNN] and [RNN + **RotE**] are our own reproductions. In this section, we report a comparison between the original results and our reproduced results for the **RNNLogic**+ model ([RNN]) on the WN18RR and FB15K-237 datasets. As can be observed in Table 12, our reproduced results are better than the published results of the RNNLogic model for both datasets, using hyperparameters obtained after communication with the authors of the RNNLogic paper.
Table 12: Comparison of the results reported in the original RNNLogic paper with the results reproduced by the authors of this paper.

| Dataset | Numbers | MR | MRR | H@1 | H@3 | H@10 |
|---|---|---|---|---|---|---|
| WN18RR | Reported | 7204 | 48.9 | 45.3 | 50.6 | 56.3 |
| WN18RR | Reproduced | 5858 | 49.6 | 45.5 | 51.4 | 57.4 |
| FB15K-237 | Reported | 480 | 29.9 | 21.5 | 32.8 | 46.4 |
| FB15K-237 | Reproduced | 256 | 32.9 | 24.0 | 36.1 | 50.6 |

## E ExpressGNN Training And Hyperparameter Setting
As already discussed in Section 4, in order to prove the broad applicability of proposed augmentations
(AUG) in our work, we perform experiments with ExpressGNN model as another baseline in Table 2.
In this section we provide the details of this experiment. The current implementation of **ExpressGNN**
model scales poorly with the number of rules, necessitating the use of a much smaller ruleset size.
We generate a ruleset for each dataset by selecting the top 5-10 rules per relation (in the rule head)
from RNNLogic rules for that dataset (**ORIG**) based on the PCA score. This results in 417 rules for WN18RR, 500 rules for Kinship and 460 rules for UMLS. We perform augmentations on these rules and further maintain a threshold of the PCA score to be 0.95 while filtering RW rules. After augmentation, we obtain 1734 rules for Kinship, 2058 rules for UMLS and 828 rules for WN18RR. We also augment the training and the test set of ExpressGNN datasets with the inverse triples (t, r−1, h)
for each original (h, r, t) triple. Hyperparameters used for training are the optimal ones from the original paper. Results for FB15k-237 are omitted since ExpressGNN does not scale up to the augmented ruleset.
ExpressGNN assumes the knowledge of test queries at training time to construct its Markov Logic Network. For the test triple (h, r, t), this informs the model that h is a potential head and t is a potential tail entity for given relation r, even though this information might not be present in the training data. Hence, results presented in Table 2 are not directly comparable to results in Table 1.
## F Rule Generation Via Random Walks
Because rules generated by employing random walks form a distinct ruleset in the main paper ([RNN]-(RW)), we explain the statistics of these rules in detail in a dedicated section here. In order to determine the number of rules generated from the random walks, we calculate the difference of the columns '#Rules + INV + ABD' and '#Rules + INV + ABD + RW' in Table 8 and summarize the resulting statistics of the number of RW rules created for each dataset in Table 13. When compared to Table 8, we note that although random walk rules (RW)
comprise less than 8% of the augmented ruleset for all the datasets, these rules are still pivotal. This is because we notice a considerable decrease in performance after removing these rules as observed in Table 4, Table 10 and Table 11.
Table 13: Number of random walk rules (RW) generated per dataset in the experiments
| Dataset | FB15K-237 | WN18RR | Kinship | UMLS |
|---|---|---|---|---|
| #RW Rules | 2687 | 2425 | 14219 | 10313 |
## G Rnnlogic+ Training And Hyperparameter Setting
Here we describe the training of RNNLogic+
model that is utilized in Table 1 and complementary Table 9. We use the same methodology for training RNNLogic+ model as in the original work (Qu et al., 2021). New rule embeddings are created for all the rules that are added to the rule set after rule augmentation. Rule embedding dimension is set to 16 (compared to 32 in original RNNLogic+) across datasets to mitigate the effect of the increased number of parameters in the model due to new rule embeddings. Results reported are for a single run with fixed seed over 5 epochs of training.
The hyperparameter η in Equation (2) representing the relative weight is set to 0.01, 0.05, 0.1 and 0.5 for WN18RR, FB15k-237, UMLS and Kinship respectively. The RotatE embedding dimension is set to 200, 500, 1000 and 2000 for WN18RR,
FB15k-237, UMLS and Kinship respectively. We keep a consistent batch size of 8, 4, 32 and 16 for WN18RR, FB15k-237, UMLS and Kinship respectively. The number of parameters for RNNLogic+
scales with the rule embedding size and the number of rules, reaching a maximum of 16*298446 =
4775136 for FB15k-237 after augmentations and filtering (leading to a training time of around 23 hours). As we can see, augmentation adds new rules, leading to an increase in the parameters of the model. All training was carried out on a single Tesla V100 GPU. The optimal values of all the hyper-parameters were found by tuning on the validation set of each dataset.
## H Detailed Results On Proposed Augmentations
Results in Table 9 are supplementary to results already presented in Table 1. In addition to MRR,
Hits@1 and Hits@10 presented in Table 1 in the Experiments section, we also present Mean Rank
(MR) and Hits@3 here. As discussed already in Section 4, AUG includes abduction (ABD), inversion
(INV), rule filtering (FIL) and random walk augmentation (RW).
In Table 9, we observe that there is a consistent improvement in the performance of the model for all the metrics after rule augmentation and filtering
(AUG). Notably, for the two new metrics introduced in Table 9, we obtain a performance gain of 3.7 point on Hits@3 and 40.4% on MR for FB15K-237 dataset and [RNN]-(RW) baseline. Since the original rules for the random walk baseline are lesser in number, [RNN]-(RW) and [RNN + **RotE**] - (RW) benefit more from augmentation. We also observe that for Kinship and UMLS, [RNN + **RotE**] - (RW) gives better performance than [RNN + RotE] - (**Orig**),
highlighting the quality of the rules discovered by local random walks followed by PCA filtering.
## I Detailed Results Of Ablation Study
Results in Table 10 are supplementary to results already presented in Table 4. Besides the three metrics presented in Table 4, we present Hits@3 and MR in this table. Additionally, we also demonstrate results of the ablation on the WN18RR dataset in Table 11. Ablation is not performed on FB15K-237 due to computational constraints. As with the other metrics, Hits@3 and MR are the most affected by abductive rules in UMLS and WN18RR because abduction results in augmenting the ruleset with a large number of high-quality rules (see Table 3).
Furthermore, Hits@3 and MR are most affected by PCA-based random walk augmentation on the Kinship dataset. This is because Kinship is a dense dataset, and a large number of high-quality rules are quickly discovered by the random walks.
## J Detailed Results Of Rule Generation Vs Rule Augmentation
Results in Table 14 are supplementary to the results already presented in Table 6. Here we present Hits@3 and MR as two additional metrics for analyzing the need for rule augmentation.
We generate rules by training the RNNLogic model.
We consider 80 rules per relation for each dataset from these rules and expand them by performing the three augmentations and filtering. This results in a total of 9867 rules for WN18RR and 18432 rules for the Kinship data. Then, we train RNNLogic+ with RotatE ([RNN+**RotE**]) on these rules and compare the results with the RNNLogic+ with RotatE model trained on 500 rules per relation without augmentations. We observe that the model trained with augmented rules consistently performs better than the model trained by merely increasing the number of rules generated, even for a comparable number of rules. Specifically, we observe that the model trained with augmented rules shows a 4 point Hit@1 gain on the Kinship dataset over the model trained with merely increased rules. These results strengthen the hypothesis that it is more helpful to leverage a few high-quality augmented rules rather than exploiting a plethora of lower-quality rules for Neuro-Symbolic KG Completion.
## K Qualitative Analysis Of The Augmented Rules
In this section, we present one logical rule generated after each augmentation step as examples. The rules are taken from the FB15K-237 dataset.
Table 14: Comparison of performance by rule augmentation with performance on the original rules on WN18RR
and Kinship. R/R and TR is number of rules per relation and total rules generated from RNNLogic respectively.
ABD represents abduction performed on original rules.
| Dataset | R/R | TR | ABD | MR | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|---|---|---|---|
| WN18RR | 80 | 9867 | Yes | 4701.61 | 49.0 | 44.9 | 50.5 | 56.7 |
| WN18RR | 500 | 11000 | No | 4848.39 | 47.7 | 43.7 | 49.8 | 55.2 |
| Kinship | 80 | 18432 | Yes | 3.21 | 69.5 | 56.1 | 79.4 | 94.6 |
| Kinship | 500 | 25000 | No | 3.62 | 66.1 | 52.1 | 75.3 | 93.1 |
1. ABD: LivesIn(PersonA, LocationB) :- PlayFor(PersonA, TeamC), Inverse_Team_Location(TeamC, LocationB)
2. INV: Inverse_Person_Language(LanguageA, PersonB) :- Inverse_Film's_Language(LanguageA, FilmC), StoryWrittenBy(FilmC, PersonB)
3. RW: Friends(PersonA, PersonB) :- Friends(PersonA, PersonC), Inverse_Producer(PersonC, FilmD), Writer(FilmD, PersonB)
For example, the rule in the ABD category states that a person lives in the location where the team they play for is based. Therefore, we conclude that the rules captured through augmentations can be human-interpretable.
## L An Alternative Augmentation Strategy
Recall that in our proposed methodology
(Orig+AUG) in Section 3, we consider original rules (**ORIG**) and perform abduction (ABD) on the original rules. This is followed by rule inversion
(INV) over the original rules and abductive rules. Then, we introduce the random walk rules (RW)
as the final augmentation step in the proposed augmentations (AUG) for the original (**Orig**)
ruleset. In this section, we consider an alternative sequence of augmenting the ruleset where we consider both the original (**Orig**) and the random walk rules (RW) and apply abduction and rule inversion on both of them. We denote this setting as (Orig + **AUG2**). We report a comparison of
(Orig + **AUG2**) with (Orig + AUG) (Table 1) with
[RNN + **RotE**] as the baseline model in Table 15.
From the results in the table, we conclude that Orig + **AUG2** does not result in improvement over our original methodology of Orig + AUG. It also creates a larger ruleset, further slowing down the training of the model.
Table 15: Comparison of performance by exploring two
methodologies of augmentations: (Orig + AUG) and
(Orig + **AUG2**).
| Dataset | Augmentation | MRR | H@1 | H@10 |
|---|---|---|---|---|
| WN18RR | Orig + AUG | 55.0 | 51.0 | 63.5 |
| WN18RR | Orig + AUG2 | 54.4 | 50.2 | 62.9 |
| Kinship | Orig + AUG | 72.9 | 59.9 | 96.4 |
| Kinship | Orig + AUG2 | 71.1 | 58 | 95.8 |
## M Pca-Confidence Metric
In this section, we explain in detail, the PCAconfidence metric that has been employed to score the rules discovered through random walk in our third augmentation approach. This metric has also been used to score the augmented rules in Table 3.
PCA: The calculation of the metric utilizes a Partial Closed World assumption (Galárraga et al., 2013) and assumes that if we know one t for a given r and h in r(h, t), then we know all t′for that h and r. Let the rules under consideration be of the form B ⇒
r(h, t). Then the PCA-score **PCAConf**(B ⇒ r) is:
$$\mathrm{PCAConf}(B \Rightarrow r) = \frac{\#\,(h, t) : |\mathrm{Path}(h, B, t)| > 0 \,\wedge\, r(h, t) \in P}{\#\,(h, t) : |\mathrm{Path}(h, B, t)| > 0 \,\wedge\, \exists\, t' : r(h, t') \in P}$$
Essentially, it is the number of positive examples, P, satisfied by the rule divided by the total number of (h, t) satisfied by the rule such that r(h, t′) is a positive example for some t′.
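A small sketch of this computation, assuming the groundings of the rule body and the positive examples of the head relation have already been collected (names are illustrative):

```python
def pca_confidence(path_pairs, known_tails):
    """path_pairs: set of (h, t) pairs with at least one grounding of the rule body B;
    known_tails: dict h -> set of tails t with r(h, t) in the KG (the positives P).
    Returns the PCA confidence of the rule B => r as defined above."""
    support = sum(1 for h, t in path_pairs if t in known_tails.get(h, ()))
    pca_body = sum(1 for h, t in path_pairs if known_tails.get(h))  # h has some known tail
    return support / pca_body if pca_body else 0.0
```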
## N Foil-Score Metric
We employ a modification of FOIL as one of the evaluation metrics to assess the quality of rules produced by augmentation techniques (Q1) in Table 3.
FOIL-scoring metric is discussed in detail below.
FOIL: Let the rules be of the form B ⇒ r(h, t).
Let **Path**(h, B, t) be the set of paths from h to t that act as groundings for the rule body B. Under the Closed World assumption, we assume that all triples not in the training and test set are false.
Inspired by the First-Order Inductive Learner algorithm (Quinlan, 1990), we define FOIL score to assess the quality of a rule as follows:
$$\mathrm{FOIL}(B \Rightarrow r) = \frac{\sum_{r(h,t) \in P} |\mathrm{Path}(h, B, t)|}{\sum_{(h,t)} |\mathrm{Path}(h, B, t)|}$$
In the above equation, P represents the set of positive examples in the given KG. The key difference between the FOIL score proposed originally (Quinlan, 1990) and ours is that instead of considering the number of examples satisfied by the rule, we calculate the number of groundings of the rule. This is more in line with the score calculated by RNNLogic+, which considers the number of groundings as well. Ideally the rules should have larger number of groundings for positive triples in comparison to the other triples.
Typically, negative sampling is used to calculate these metrics (PCA in Appendix M and FOIL here)
as it is computationally expensive to compute exhaustive negative examples. However, we calculate these metrics by considering the entire knowledge graph, which is enabled by utilizing batching and sparse matrix operations on the adjacency graph.
We highlight that we are the first to show the utility of PCA Confidence and FOIL in the context of neuro-symbolic models. This makes our specific approach distinct from AMIE (Galárraga et al., 2013) and FOIL (Quinlan, 1990), and more targeted to our setting due to the changes in the method of computation.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section or justification Section Limitations, Page number 6
✓ A2. Did you discuss any potential risks of your work?
Section Ethics Statement, Page number 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section Abstract and 1 (Introduction), Page number 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
Section References, Page number 6 and Appendix C, Page number 7
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We have used publicly available code and rule files released by the authors of RNNLogic on Github, which has not been explicitly licensed. We have mentioned source of rule files in Appendix C, Page number 7.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The authors of the code for RNNLogic and ExpressGNN have not explicitly stated their intended use of code on Github. They only require potential users to cite their paper if they use the code, which we have done.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
All the datasets that we have used in our experiments are standard datasets and we have cited the creators for each one of them.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We have not created any new artifacts through our work. We have provided original and augmented rule sets used in our experiments in the submitted code.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A, Page number 7 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix G, Page number 10
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix G, Page number 10
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix G, Page number 10
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D, Page number 9
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lv-etal-2023-parameter | Parameter-efficient Weight Ensembling Facilitates Task-level Knowledge Transfer | https://aclanthology.org/2023.acl-short.24 | Recent studies show that large-scale pre-trained language models could be efficaciously adapted to particular tasks in a parameter-efficient manner. The trained lightweight set of parameters, such as adapters, can be easily stored and shared as a capability equipped with the corresponding models. Owning many lightweight parameters, we focus on transferring them between tasks to acquire an improvement in performance of new tasks, the key point of which is to obtain the similarity between tasks. In this paper, we explore 5 parameter-efficient weight ensembling methods to achieve such transferability and verify the effectiveness of them. These methods extract the information of datasets and trained lightweight parameters from different perspectives to obtain the similarity between tasks, and weight the existing lightweight parameters according to the comparability to acquire a suitable module for the initialization of new tasks. We apply them to three parameter-efficient tuning methods and test them on a wide set of downstream tasks. Experimental results show that our methods show an improvement of 5{\%}{\textasciitilde}8{\%} over baselines and could largely facilitate task-level knowledge transfer. |
## Parameter-Efficient Weight Ensembling Facilitates Task-Level Knowledge Transfer
Xingtai Lv1∗, Ning Ding2∗, Yujia Qin2, Zhiyuan Liu2,3,4,5†**, Maosong Sun**2,3,4,5†
1Department of Electronic Engineering, Tsinghua University 2Department of Computer Science and Technology, Tsinghua University 3BNRIST, Tsinghua University, 4Institute for Artificial Intelligence, Tsinghua University 5International Innovation Center of Tsinghua University, Shanghai
{lvxt20, dingn18, qyj20}@mails.tsinghua.edu.cn
{liuzy, sms}@tsinghua.edu.cn
## Abstract
Recent studies show that large-scale pretrained language models could be efficaciously adapted to particular tasks in a parameterefficient manner. The trained lightweight set of parameters, such as adapters, can be easily stored and shared as a capability equipped with the corresponding models. Owning many lightweight parameters, we focus on transferring them between tasks to acquire an improvement in performance of new tasks, the key point of which is to obtain the similarity between tasks. In this paper, we explore 5 parameter-efficient weight ensembling methods to achieve such transferability and verify the effectiveness of them. These methods extract the information of datasets and trained lightweight parameters from different perspectives to obtain the similarity between tasks, and weight the existing lightweight parameters according to the comparability to acquire a suitable module for the initialization of new tasks.
We apply them to three parameter-efficient tuning methods and test them on a wide set of downstream tasks. Experimental results show that our methods show an improvement of 5%~8% over baselines and could largely facilitate task-level knowledge transfer.
## 1 Introduction
Increasingly large pre-trained language models
(PTMs) (Bommasani et al., 2021; Han et al.,
2021; Raffel et al., 2020; Brown et al., 2020) have yielded exceptional performances on a variety of tasks but also suffer from prohibitive adaptation costs with full parameter fine-tuning. It is not a feasible choice to fine-tune all parameters of a colossal model for each specific downstream task and produce a corresponding instance at the same size. To overcome this obstacle, a branch of research, namely parameter-efficient tuning, has
been actively developed and explored (Ding et al., 2023; Houlsby et al., 2019; Li and Liang, 2021; Lester et al., 2021; Hu et al., 2021).

∗equal contributions †corresponding authors
It demonstrates that only optimizing a tiny portion of parameters and keeping the PTM frozen could achieve on-par performance with full parameter fine-tuning on many tasks. After training, the set of updated parameters is lightweight and portable for storing and sharing. Although the specific structures of these parameters may be different, we treat them in a unified perspective and call them *lightweight objects*. Once lightweight objects are trained, they can be adapted to specific datasets conditioned on a large-scale PTM and be placed aside for storage in a space-efficient manner. Due to its lightweight nature, it is pragmatic to build a platform to store and share such lightweight objects for various scenarios (Beck et al., 2022).
However, the maneuverability of the platform for storing and sharing lightweight objects is still not fully exploited. In the current paradigm, one could directly access and utilize lightweight objects trained on existing datasets but face hindrances in utilizing the knowledge of these objects for new datasets (Vu et al., 2021). Such existing lightweight objects can be a valuable resource, as the knowledge contained in them has the probability of transferring to similar tasks. In this paper, we focus on the transferring of lightweight objects and aim to leverage them to boost the performance of new datasets. We assume that more similar tasks can share more knowledge, so the key point is to acquire the similarity between existing lightweight objects and new tasks. Specifically, we first assess 3 straightforward approaches to facilitate parameter-efficient tuning on new datasets. Observing unsatisfactory results on such approaches, we develop a parameter-efficient weight ensembling framework that could produce a set of parameters according to new datasets and existing lightweight objects. Under the framework, we explore 5 specific methods as a comprehensive study.

![1_image_0.png](1_image_0.png)
Extensive experiments across 8 transferring approaches, 51 downstream datasets, and 3 parameter-efficient tuning methods demonstrate that the parameter-based framework could considerably advance the performance compared to baseline methods. We further carry out experimental analysis to verify the compatibility and internal properties.
## 2 Investigated Methods
We consider a scenario where pre-trained language models M are *frozen*. And for downstream tasks {T1, T2*, ...,* Tn}, the associated lightweight modules O∗ = {O1, O2*, ...,* On} are produced via parameter-efficient fine-tuning (e.g., LoRA (Hu et al., 2021), Adapter (Houlsby et al., 2019)).
Given a new task Tnew with a few examples, our goal is to explore the best approach to utilize pre-existing lightweight objects to cultivate the best-performing lightweight object Onew for the initialization of Tnew. We investigate 8 strategies to transfer lightweight objects across tasks, including 3 baselines and 5 particular methods under our parameter-efficient weight ensembling framework.
## 2.1 Baselines
Two straightforward methods are directly averaging all objects and, further, averaging only the objects of similar tasks. These two methods and random initialization (From Scratch) are simple and intuitive, and we treat them as baselines.
From Scratch. This approach is the common parameter-efficient tuning pipeline. We train a randomly initialized lightweight object on the training set of the new task and evaluate it on the test set.
Avg. of Checkpoints. This approach straightforwardly takes the average of the checkpoints of all existing lightweight objects as the initialization of the new lightweight object. The new object is then trained on the training set and evaluated on the test set; formally, $\mathcal{O}_{\mathrm{new}} \leftarrow \frac{1}{n}\sum_{i}\mathcal{O}_i$.
Manual Division. We manually select tasks that are similar to the new task based on the similarity of the task data and then average the lightweight objects corresponding to these tasks and use the result for initialization. This method is employed by Friedman et al. 2021.
## 2.2 Parameter-Efficient Weight Ensembling
In accordance with the tenet of digging for more information about the transfer with less cost, we develop a parameter-efficient weight ensembling framework, the core of which is to obtain a similarity indicator. We explore 4 methods (Loss, KL-divergence, EL2N, and Cosine of Logits and Labels) acquiring the indicator mainly from the data and 1 method (GraNd) acquiring the indicator mainly from the weights under the framework.
We consider the procedure to exploit existing lightweight objects as a process of soft selection, similar to attention networks. Such a selection is conducted based on an indicator Si that assesses the contribution of one existing lightweight object Oi to Tnew; the initialization of the new lightweight object can be obtained by
$$\mathcal{O}_{\mathrm{new}}=\sum_{i=1}^{n}\operatorname{Softmax}(\mathcal{S}_{i}/\tau)\cdot\mathcal{O}_{i}\qquad(1)$$
where τ is the temperature hyper-parameter. Next, we introduce instantiations that are explored in this paper to construct different S.
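A minimal sketch of this soft selection over stored lightweight objects, treating each object as a PyTorch state dict (the function and argument names are illustrative, not the authors' code):

```python
import torch

def ensemble_lightweight_objects(objects, indicators, tau=1.0):
    """objects: list of state dicts of existing lightweight objects O_i (same keys/shapes);
    indicators: list of float similarity scores S_i.
    Returns the softmax-weighted initialization O_new of eq. (1)."""
    w = torch.softmax(torch.tensor(indicators, dtype=torch.float) / tau, dim=0)
    return {k: sum(w[i] * obj[k] for i, obj in enumerate(objects))
            for k in objects[0]}
```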
Loss. The output of the zero-shot loss function is a reasonable measurement under this circumstance.
We directly feed examples of the valid data of Tnew to Oi and compute the CrossEntropy loss Li without any optimization. The indicator is set to the opposite of the loss output Si = −Li.
KL-divergence. We first train a randomly initialized lightweight object Õnew on the training set of Tnew. Then we feed examples of the valid dataset of Tnew separately to Oi and Õnew. For the two lightweight objects, we take the representation of the final layer of each output token tij through a softmax function to obtain the corresponding probability distributions Pij and P̃ij. We then calculate the KL-divergence Kij to assess the similarity of the two distributions, which further contributes to the final indicator. After iterating the process over all the tokens, we add up the KL-divergence values and take the opposite as the indicator: Si = −Σj Kij.
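Under the same assumptions, the KL-divergence indicator could look roughly as follows; the direction of the KL term is our own choice for illustration, since the text does not pin it down.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def kl_indicator(model_i, model_new, tokenizer, examples, device="cpu"):
    """S_i = -sum_j K_ij, where K_ij is the KL-divergence between the two token distributions."""
    model_i.eval(); model_new.eval()
    total_kl = 0.0
    for source, target in examples:
        enc = tokenizer(source, return_tensors="pt").to(device)
        labels = tokenizer(target, return_tensors="pt").input_ids.to(device)
        log_p_i = F.log_softmax(model_i(**enc, labels=labels).logits, dim=-1)  # (1, T, V)
        p_new = F.softmax(model_new(**enc, labels=labels).logits, dim=-1)
        # KL(P_new || P_i) summed over all output tokens (direction is a design choice here).
        total_kl += F.kl_div(log_p_i, p_new, reduction="sum").item()
    return -total_kl
```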
EL2N. Similar to the foregoing KL-divergence method, for a lightweight object Oi, we obtain a probability distribution Pij for each output token tj in Tnew. At the same time, we construct a one-hot vector Vij of the ground-truth label. In this approach, we directly calculate the Euclidean distance between Pij and Vij as a measurement, denoted as dij. The final indicator is the negative of the summation of d over all the tokens: Si = −Σj dij.
Cosine of Logits and Labels. The process of this method is basically the same as the last approach, except that we use the cosine function to calculate the measurement between Pij and Vij.
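Since EL2N and Cosine of Logits and Labels differ only in how the output distribution Pij is compared with the one-hot gold vector Vij, a single sketch can cover both (same assumptions as above).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def el2n_and_cosine_indicators(model_i, tokenizer, examples, device="cpu"):
    """EL2N: S_i = -sum_j ||P_ij - V_ij||_2; Cosine: based on cos(P_ij, V_ij) instead."""
    model_i.eval()
    el2n_sum, cos_sum = 0.0, 0.0
    for source, target in examples:
        enc = tokenizer(source, return_tensors="pt").to(device)
        labels = tokenizer(target, return_tensors="pt").input_ids.to(device)
        probs = F.softmax(model_i(**enc, labels=labels).logits, dim=-1)  # (1, T, V)
        one_hot = F.one_hot(labels, num_classes=probs.size(-1)).float()  # (1, T, V)
        el2n_sum += torch.linalg.norm(probs - one_hot, dim=-1).sum().item()
        cos_sum += F.cosine_similarity(probs, one_hot, dim=-1).sum().item()
    # EL2N negates the accumulated distance; for cosine, larger similarity is better as-is.
    return -el2n_sum, cos_sum
```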
GraNd. Gradients can be viewed as the amount of change a model needs to adapt to a specific task. Similar to the approach that involves loss function, we directly feed examples in Tnew to a lightweight object Oi and calculate the CrossEntropy loss.
Then, for each layer of the lightweight object, we compute the gradient of the parameters with respect to the cross-entropy loss. We then calculate the sum of the squares of these gradients, Gi, and take the negative of its square root to get Si = −√Gi.
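A sketch of the GraNd indicator is given below; it assumes the lightweight object's parameters can be identified by a name substring such as 'adapter', which is an assumption about how the modules are registered in the backbone.

```python
import torch

def grand_indicator(model, tokenizer, examples, device="cpu", module_keyword="adapter"):
    """S_i = -sqrt(G_i), with G_i the summed squared gradients of the lightweight parameters."""
    model.zero_grad()
    for source, target in examples:
        enc = tokenizer(source, return_tensors="pt").to(device)
        labels = tokenizer(target, return_tensors="pt").input_ids.to(device)
        model(**enc, labels=labels).loss.backward()  # accumulate gradients over the examples
    g_sq = 0.0
    for name, param in model.named_parameters():
        if module_keyword in name and param.grad is not None:
            g_sq += param.grad.pow(2).sum().item()
    model.zero_grad()
    return -(g_sq ** 0.5)
```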
The above methods consider the possible contributions of existing lightweight objects Oi to Onew from various technical angles for knowledge transfer. In the empirical study of the next section, we show that our approaches could substantially outperform existing baselines and tap the potential of existing lightweight objects.
## 3 Experiments
In this section, we apply the aforementioned approaches in different scenarios for experimental comparisons and analysis.
## 3.1 Experiment Settings
We use T5base as the backbone model and choose 32 Question Answering (QA) tasks from CrossFit Gym (Ye et al., 2021) for evaluation. To further assess the generality of the investigated methods, we also experiment with our methods on 19 additional, more diverse tasks. All tasks are formulated in the text-to-text format. We iteratively treat each task as the upcoming new task and the remaining 31 tasks as existing tasks in the platform. In this way, we perform 32 trials with different new tasks, obtain 32 results, and average them as the final result. We randomly select a small amount of data from the original datasets of the new task.
Specifically, the data of the *i-th* new task Ti can be represented as a tuple (D^i_train, D^i_dev, D^i_test), and the sizes of D^i_train and D^i_dev are both set to 16n for n-classification tasks and to 64 for other tasks. We apply our approaches to three popular parameter-efficient tuning methods (Adapter (Houlsby et al., 2019), LoRA (Hu et al., 2021), Prefix (Li and Liang, 2021)). Given that the PTM we mainly use is T5base, and that the prompt tuning (Lester et al., 2021) method has significant convergence issues when applied to it, we choose not to experiment with prompt tuning. Other experimental settings are shown in Appendix A.
## 3.2 Results And Analysis
Results on 32 QA Tasks. As reported in Table 1, we observe that: (1) the results of our approaches considerably outperform existing baselines in general, and the superiority holds for all three parameter-efficient methods. (2) The results of approaches that rely on the information from data
(e.g., Cosine of Logits and Labels) are generally better than those resorting to the information from the weights (GraNd). (3) EL2N and Cosine of Logits and Labels, which directly extract information from the difference between logits and labels, perform best in knowledge transfer. We suspect that it would be easier to extract task features directly from the data under the framework.
Results on 51 Diverse Tasks. In addition to the QA tasks, we expand the size of evaluation datasets to 51 to further evaluate the knowledge transfer
| Approach | Adapter | LoRA | Prefix |
|---------------------------------------|-------------|-------------|-------------|
| Baselines | | | |
| From Scratch | 31.7 | 31.5 | 30.1 |
| Avg. of Checkpoints | 32.9 | 33.8 | 31.8 |
| Manual Division | 34.9 | 35.4 | 32.3 |
| Parameter-efficient Weight Ensembling | | | |
| GraNd | 34.5 (-0.4) | 35.4 (+0.0) | 35.1 (+2.8) |
| Loss | 38.9 (+4.0) | 38.7 (+3.3) | 36.2 (+3.9) |
| KL-divergence | 37.4 (+2.5) | 37.1 (+1.7) | 35.7 (+3.4) |
| EL2N | 38.6 (+3.7) | 39.1 (+3.7) | 37.3 (+5.0) |
| Cosine of Logits and Labels | 40.4 (+5.5) | 39.5 (+4.5) | 37.2 (+4.9) |
across tasks, including classification, question answering, conditional generation, and others. Without loss of generality, we use Adapter for the following experiments. As reported in Table 2, we observe empirical conclusions similar to those of the QA experiments. In the more diverse setting, gradient information cannot reflect vital knowledge for cross-task transfer, resulting in unsatisfactory performance of GraNd. At the same time, the setting of more diverse tasks also makes knowledge transfer more difficult, which makes the gap between our approaches and the baselines slightly narrower.
![3_image_0.png](3_image_0.png)

Figure 2: Results with T5base vs. T5large for each approach (From Scratch 31.7/35.3, Avg. of Checkpoints 32.9/41, Manual 34.9/45.6, GraNd 34.5/45, Loss 38.9/49.8, KL-divergence 37.4/47.7, EL2N 38.6/48.3, Cosine of Logits and Labels 40.5/49.6).
Results with T5**large**. We also investigate the impact of the backbone model. As illustrated in Figure 2, by directly replacing the backbone model from T5base to T5large, we could observe that all the methods gain considerable improvements. Methods of parameter-efficient weight ensembling generally gain about 10% of improvement, indicating
| Approach | All Tasks |
|---------------------------------------|------------|
| Baselines | |
| From Scratch | 41.7 |
| Avg. of Checkpoints | 41.8 |
| Manual Division | 44.6 |
| Parameter-efficient Weight Ensembling | |
| GraNd | 43.9 (-0.7) |
| Loss | 45.8 (+1.2) |
| KL-divergence | 44.7 (+0.1) |
| EL2N | 47.2 (+2.6) |
| Cosine of Logits and Labels | 47.8 (+3.2) |
that our approaches allow for knowledge transfer under different backbone models and may achieve more significant results for large models.
Impact of the Number of Shots. In order to test whether our methods are effective under the data-rich scenario, we increase the amount of training data for the new task and conduct experiments similar to those described in **Results on 32 QA Tasks**. Specifically, we set the sizes of D^i_train and D^i_dev to N × k for the N-classification tasks and to 4k for other tasks. In this experiment, we set k to 32, 64, 128, and 512.
Without loss of generality, we apply our approaches only to adapter-tuning, and experiment with the From Scratch, Manual Division and Cosine of Logits and Labels approaches when k is 128 and 512. It is worth mentioning that we use different hyper-parameters in experiments with different amounts of data. We use the hyper-parameters in the *few-shot* line in Table 5 in Appendix A when k is 32 and 64 and use the hyper-parameters in the full data line in Table 5 in Appendix A when k is 128 and 512.
The specific results are listed in Table 3, from which we conclude that (1) the results of our approaches outperform existing baselines in general;
(2) the setting of more data makes the gain directly from original datasets more abundant, resulting in less gain from existing lightweight objects, which makes the gap of results between our approach and baseline narrower.
Analysis of Module Importance. To analyze the importance of different modules of the lightweight object in knowledge transfer, we experiment based on the modified GraNd approach, which can extract the information of a certain module more independently.
| K | 32 | 64 | 128 | 512 |
|---------------------------------------|------|------|-------|-------|
| Baselines | | | | |
| From Scratch | 33.5 | 36.3 | 38.8 | 45.4 |
| Avg. of Checkpoints | 35.1 | 37.7 | - | - |
| Manual Division | 37.3 | 39.2 | 41.5 | 47.6 |
| Parameter-efficient Weight Ensembling | | | | |
| Loss | 40.5 | 42.6 | - | - |
| KL-divergence | 39.2 | 41.5 | - | - |
| EL2N | 41.1 | 43.0 | - | - |
| Cosine of Logits and Labels | 41.1 | 43.7 | 43.9 | 47.9 |
Specifically, we compute the gradient of parameters with respect to the cross-entropy loss for one particular part P of the lightweight object.
The base model we choose, T5base, consists of 12 encoder blocks and 12 decoder blocks, and every decoder block has 3 sub-layers (i.e., a self-attention layer, a cross-attention layer, and a feedforward layer).
Taking P as the Adapter layers in these 12 + 12 blocks in turn, we apply the modified **GraNd**
approach and acquire 24 results. The respective averages of the 12 results corresponding to the encoder and the 12 results corresponding to the decoder are listed in Table 4. Similarly, considering that the results related to the decoder are better than those related to the encoder, we take P as the Adapter layers in the 3 × 12 sub-layers of the decoder blocks, in turn, and obtain 36 results. We respectively average the 12 results of the self-attention layers, cross-attention layers, and feedforward layers and acquire 3 results that are listed in Table 4. All layer-wise results are shown in Appendix B.
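As an illustration of restricting the computation to one particular part P, the parameters can be filtered by module name; the pattern below follows the HuggingFace T5 naming scheme (decoder.block.N.layer.M), while the 'adapter' substring is an assumption about how the adapters are registered.

```python
def squared_grad_for_part(model, block_idx, sublayer_idx, side="decoder"):
    """Sum of squared gradients for the adapter parameters of one specific sub-layer."""
    pattern = f"{side}.block.{block_idx}.layer.{sublayer_idx}"
    g_sq = 0.0
    for name, param in model.named_parameters():
        if pattern in name and "adapter" in name and param.grad is not None:
            g_sq += param.grad.pow(2).sum().item()
    return g_sq
```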
These results could reflect the importance of different modules in parameter-efficient knowledge transfer. We observe (1) the results of the decoder blocks are higher than those of the encoder blocks, indicating more importance of the decoder;
(2) in decoder blocks, cross-attention layers produce lower results than self-attention layers and feed-forward layers, and it demonstrates that information that is propagated in the decoder is crucial for knowledge transfer.
## 4 Conclusion
This paper investigates task-level knowledge transfer under the scenario of parameter-efficient
| Encoder Block | Decoder Block | Self-Attention Layer | Cross-Attention Layer | FF Layer |
|---------------|---------------|----------------------|-----------------------|----------|
| 32.7 | 36.2 | 36.95 | 33.88 | 37.01 |

Table 4: Averaged results of the modified GraNd approach over encoder/decoder blocks and over the sub-layers of the decoder blocks.
tuning. We empirically explore 8 strategies to use existing lightweight objects to perform knowledge transfer for the adaptation of new tasks. Experimental results and analysis show that our methods could effectively utilize knowledge distributed in lightweight objects. We expect our exploration could facilitate the development and application of parameter-efficient tuning of large language models.
## Limitations
Our approaches, which are developed in the parameter-efficient weight ensembling framework, and our experiments have the following limitations. First of all, our framework cannot efficiently extract information from the parameters of the trained lightweight objects, resulting in relatively unsatisfactory performance of the approach resorting to the information from the weights, i.e., GraNd. Furthermore, the modules that we focus on in our analysis of module importance are only blocks and sub-layers of the blocks. We have not probed finer modules, in which we speculate more precise information about transferring lightweight objects across tasks is concealed. Last, all tasks in our experiments are formulated into the text-to-text format, and we have not conducted analysis on tasks in other formats.
## Acknowledgements
This work is supported by the National Key R&D Program of China (No. 2020AAA0106502) and the National Natural Science Foundation of China (No. 62236011).
## References
Francesco Barbieri, Jose Camacho-Collados, Leonardo Neves, and Luis Espinosa-Anke. 2020. Tweeteval: Unified benchmark and comparative evaluation for tweet classification. *arXiv preprint* arXiv:2010.12421.
Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the ai:
Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662–678.
Tilman Beck, Bela Bohlender, Christina Viehmann, Vincent Hane, Yanik Adamson, Jaber Khuri, Jonas Brossmann, Jonas Pfeiffer, and Iryna Gurevych. 2022.
AdapterHub playground: Simple and flexible fewshot learning with adapters. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 61–75, Dublin, Ireland. Association for Computational Linguistics.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In *Proceedings of the 2013* conference on empirical methods in natural language processing, pages 1533–1544.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. *arXiv preprint* arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019a. Codah: An adversarially-authored question answering dataset for common sense. In *Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for* NLP, pages 63–69.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019b. Tabfact: A largescale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint* arXiv:1905.10044.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.
Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-fever: A dataset for verification of real-world climate claims. arXiv preprint arXiv:2012.00614.
Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2023. Parameterefficient fine-tuning of large-scale pre-trained language models. *Nature Machine Intelligence*, pages 1–16.
Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Third International Workshop on Paraphrasing
(IWP2005).
Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017.
Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. Eli5:
Long form question answering. arXiv preprint arXiv:1907.09190.
Dan Friedman, Ben Dodge, and Danqi Chen. 2021.
Single-dataset experts for multi-dataset question answering. *ArXiv preprint*, abs/2109.13880.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In *Proceedings of the 2nd Workshop on New* Frontiers in Summarization, pages 70–79.
Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In **SEM 2012: The First Joint* Conference on Lexical and Computational Semantics
- Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398.
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. 2021. Pre-trained models: Past, present and future. *AI Open*, 2:225–250.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *Proceedings of ICML*.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *arXiv preprint* arXiv:2106.09685.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning.
arXiv preprint arXiv:1909.00277.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A
dataset for question answering via sentence composition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8082–8090.
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018.
Scitail: A textual entailment dataset from science question answering. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 32.
Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking for public health claims.
arXiv preprint arXiv:2010.09926.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. *arXiv* preprint arXiv:1704.04683.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of EMNLP*.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of ACL, pages 4582–4597, Online. Association for Computational Linguistics.
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! numersense: Probing numerical commonsense knowledge of pre-trained language models. arXiv preprint arXiv:2005.00683.
Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wallenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65(4):782–796.
Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In *Proceedings of the 7th* ACM conference on Recommender systems, pages 165–172.
Clara H McCreery, Namit Katariya, Anitha Kannan, Manish Chablani, and Xavier Amatriain. 2020. Effective transfer learning for identifying similar questions: matching user questions to covid-19 faqs. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 3458–3465.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. *arXiv preprint arXiv:1809.02789*.
Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. *arXiv preprint* arXiv:1808.08745.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial nli: A new benchmark for natural language understanding. *arXiv preprint arXiv:1910.14599*.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? *arXiv preprint arXiv:1909.01066*.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–
67.
Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting closer to ai complete question answering: A set of prerequisite real tasks.
In *Proceedings of the AAAI conference on artificial* intelligence, volume 34, pages 8722–8731.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. *Communications of the ACM*, 64(9):99–106.
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. Dream: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231.
Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019a. Quarel: A dataset and models for answering questions about qualitative relationships. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7063–
7071.
Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019b. Quartz: An open-domain dataset of qualitative relationship questions. arXiv preprint arXiv:1909.03553.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*.
Niket Tandon, Bhavana Dalvi Mishra, Keisuke Sakaguchi, Antoine Bosselut, and Peter Clark. 2019.
Wiqa: A dataset for" what if..." reasoning over procedural text. *arXiv preprint arXiv:1909.04739*.
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2021. Spot: Better frozen model adaptation through soft prompt transfer. arXiv preprint arXiv:2110.07904.
William Yang Wang. 2017. " liar, liar pants on fire":
A new benchmark dataset for fake news detection.
arXiv preprint arXiv:1705.00648.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R
Bowman. 2020. Blimp: The benchmark of linguistic minimal pairs for english. *Transactions of the Association for Computational Linguistics*, 8:377–392.
Johannes Welbl, Nelson F Liu, and Matt Gardner. 2017.
Crowdsourcing multiple choice science questions.
arXiv preprint arXiv:1707.06209.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. *arXiv preprint arXiv:1809.09600*.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren.
2021. Crossfit: A few-shot learning challenge for cross-task generalization in nlp. arXiv preprint arXiv:2104.08835.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. *arXiv preprint* arXiv:1808.05326.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830.
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. *arXiv preprint arXiv:2006.05987*.
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth.
2019. " going on a vacation" takes longer than" going for a walk": A study of temporal commonsense understanding. *arXiv preprint arXiv:1909.03065*.
## A Experiments Details
In this section, we describe the experimental settings in detail. The 32 Question Answering tasks and 19 diverse tasks are listed in Table 6. All datasets are publicly available and downloaded from huggingface datasets1. The lightweight objects which we use to produce the lightweight module for initialization are trained with full data, and the hyper-parameters of this experiment are listed in the line of *full data* in Table 5. For the approaches which require tuning the lightweight objects for a small number of steps first, we first train for 200 steps and evaluate every 50 steps. For the other approaches, we choose 30 examples (for the classification tasks) or 50 examples (for other tasks) to generate indicators that will be used to synthesize the initialization for the upcoming task. The particular hyper-parameters are reported in Table 5. We also run a full-parameter fine-tuning experiment with full data, the test result of which is 54.2. We use Huggingface Transformers (Wolf et al.,
2020) and PyTorch (Paszke et al., 2019) for all the experiments. Our experiments are done with NVIDIA A100 (maximum GPU memory=39.58GB). Producing the lightweight object Onew of Tnew through our investigated methods takes approximately 15 minutes and occupies 10 GB GPU memory on average, while testing on 32 QA tasks takes approximately 11 hours and occupies 18 GB GPU memory on average. T5base model
(checkpoints released by Lester et al. (2021)) contains 248 million parameters.
Table 5: Hyper-parameter setting. The line of *few-shot* shows hyper-parameter of the experiments in which we test our approaches, while the line of *full data* shows hyper-parameter of the experiments in which we get the lightweight objects.
| hyper-parameter | few-shot | full data |
|---------------------|------------|-------------|
| learning rate | 5e-4 | 5e-4 |
| batch size | 8 | 16 |
| earlystop steps | 10 | 20 |
| evaluation interval | 100 | 1000 |
| adapter size | 12 | 12 |
| lora size | 10 | 10 |
| prefix r | 24 | 24 |
| prefix num | 120 | 120 |
Question Answering / Machine Reading Comprehension adversarialqa (Bartolo et al., 2020) hotpot_qa (Yang et al., 2018)
superglue-record (Zhang et al., 2020)
Question Answering / Multiple-choice Question Answering ai2_arc (Clark et al., 2018)
codah (Chen et al., 2019a) commonsense_qa (Talmor et al., 2018) cosmos_qa (Huang et al., 2019)
dream (Sun et al., 2019)
hellaswag (Zellers et al., 2019) openbookqa (Mihaylov et al., 2018) qasc (Khot et al., 2020) quail (Rogers et al., 2020)
quarel (Tafjord et al., 2019a)
quartz-no_knowledge (Tafjord et al., 2019b) quartz-with_knowledge (Tafjord et al., 2019b) race-high (Lai et al., 2017)
race-middle (Lai et al., 2017) sciq (Welbl et al., 2017)
superglue-copa (Gordon et al., 2012)
swag (Zellers et al., 2018) wino_grande (Sakaguchi et al., 2021) wiqa (Tandon et al., 2019)
Question Answering / Binary boolq (Clark et al., 2019)
mc_taco (Zhou et al., 2019)
Question Answering / Long-form Question Answering eli5-askh (Fan et al., 2019)
eli5-asks (Fan et al., 2019)
eli5-eli5 (Fan et al., 2019)
Question Answering / Closed-book Question Answering lama-conceptnet (Petroni et al., 2019)
lama-google_re (Petroni et al., 2019) numer_sense (Lin et al., 2020)
search_qa (Dunn et al., 2017) web_questions (Berant et al., 2013)
Classification / Sentiment Analysis amazon_polarity (McAuley and Leskovec, 2013) financial_phrasebank (Malo et al., 2014)
Classification / Nli anli (Nie et al., 2019)
scitail (Khot et al., 2018)
Classification / Fact Checking climate_fever (Diggelmann et al., 2020)
health_fact (Kotonya and Toni, 2020)
liar (Wang, 2017) tab_fact (Chen et al., 2019b)
Classification / Emotion tweet_eval-offensive (Barbieri et al., 2020)
tweet_eval-sentiment (Barbieri et al., 2020)
tweet_eval-irony (Barbieri et al., 2020)
Classification / Paraphrase glue-mrpc (Dolan and Brockett, 2005)
glue-qqp medical_questions_pairs (McCreery et al., 2020)
Conditoinal Generation / Summarization samsum (Gliwa et al., 2019) xsum (Narayan et al., 2018)
Others / Linguistic Phenomenon blimp-ellipsis_n_bar_1 (Warstadt et al., 2020)
blimp-irregular_past_participle_adjectives
(Warstadt et al., 2020)
blimp-sentential_negation_npi_scope
(Warstadt et al., 2020)
Table 6: All the tasks which we use in the experiments.
The first 32 tasks are Question Answering tasks, and the the last 19 tasks are other diverse tasks.
## B **Details And Extension Of The Analysis Of** Module Importance
All the 60 experimental results described in **Analysis of Module Importance.** in Experiments 3.2 are listed in Table 7 (24 results corresponding to blocks) and Table 8 (36 results corresponding to layers). There exist some results that are satisfactory compared to the results listed in Table 1, which indicates the potential of the GraNd approach.
| Block Number | Encoder Block | Decoder Block |
|--------------|---------------|---------------|
| 0 | 31.9 | 37.5 |
| 1 | 32.6 | 35.9 |
| 2 | 31.7 | 37.1 |
| 3 | 32.4 | 35.9 |
| 4 | 33.9 | 35.1 |
| 5 | 32.3 | 37.4 |
| 6 | 33.8 | 36.0 |
| 7 | 33.0 | 34.5 |
| 8 | 32.6 | 36.9 |
| 9 | 32.1 | 36.9 |
| 10 | 32.5 | 34.6 |
| 11 | 33.7 | 36.6 |
| average | 32.7 | 36.2 |
Table 7: The (test) results of parallel experiment where we apply the modified GraNd approach on different blocks of the PLM.
| Block Number | Self-Attention Layer | Cross-Attention Layer | FF Layer |
|--------------|----------------------|-----------------------|----------|
| 0 | 37.9 | 37.2 | 35.7 |
| 1 | 37.1 | 36.8 | 37.2 |
| 2 | 35.6 | 34.1 | 37.6 |
| 3 | 35.4 | 35.1 | 38.3 |
| 4 | 37.5 | 32.0 | 37.0 |
| 5 | 33.3 | 37.0 | 38.3 |
| 6 | 38.1 | 35.4 | 38.3 |
| 7 | 38.0 | 31.6 | 37.6 |
| 8 | 37.5 | 30.9 | 38.3 |
| 9 | 36.8 | 32.2 | 36.5 |
| 10 | 37.9 | 32.7 | 35.4 |
| 11 | 38.3 | 31.5 | 33.9 |
| average | 36.95 | 33.88 | 37.01 |
Table 8: The (test) results of parallel experiment where we apply the modified GraNd approach on different layers in the decoder blocks of the PLM.
Beyond the experiments based on the GraNd approach, we also apply the modified Cosine of Logits and Labels approach to analyze the module importance from the perspective of the model output. Specifically, we first train a randomly initialized lightweight object on Tnew to obtain the fine-tuned lightweight object Õ. Then we feed examples of the valid dataset of Tnew separately to Oi and Õ. For the two lightweight objects, we take the output hidden states Hi and H̃ of one particular module P through a cosine function to acquire the indicator Si. Then, under our framework of testing tasks, we obtain a result corresponding to P. Taking P as the adapters in the 12 + 12 blocks of the base model we choose, i.e., T5base, we acquire 24 results, which are shown in Table 9. These results could reflect the importance of different modules, from which we observe that the results of the decoder blocks are higher than those of the encoder blocks, indicating more importance of the decoder.
## C Approaches Extracting Information From The Weights
| Block Number | Encoder Block | Decoder Block |
|--------------|---------------|---------------|
| 0 | 31.9 | 33.9 |
| 1 | 32.6 | 34.2 |
| 2 | 33.0 | 34.1 |
| 3 | 33.5 | 34.9 |
| 4 | 33.5 | 33.7 |
| 5 | 33.5 | 34.0 |
| 6 | 33.4 | 34.7 |
| 7 | 33.7 | 34.8 |
| 8 | 34.4 | 35.2 |
| 9 | 34.0 | 34.2 |
| 10 | 34.2 | 33.7 |
| 11 | 32.0 | 34.0 |
| average | 33.31 | 34.28 |

Table 9: The (test) results of the parallel experiment where we apply the modified Cosine of Logits and Labels approach on different blocks of the PLM.
Beyond the GraNd approach, we also explore 3 specific methods resorting to the information from the weight, Cosine, Euclidean, and Performance.
Cosine. We train a randomly initialized lightweight object Õnew on the training set of Tnew. Then, for each existing lightweight object Oi ∈ O∗, we calculate the cosine similarity between Õnew and Oi as the indicator. The final initialization of the new lightweight object is the weighted average according to the cosine similarity after softmax: Onew ← Σi Softmax[cos(Õnew, Oi)] · Oi.
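A sketch of this Cosine approach over flattened parameter vectors might look as follows; PyTorch is assumed, and all lightweight objects are required to share identical parameter names and shapes.

```python
import torch
import torch.nn.functional as F

def flatten_state_dict(sd):
    return torch.cat([sd[name].float().flatten() for name in sorted(sd)])

def cosine_weight_ensemble(existing_sds, new_sd):
    """O_new <- sum_i Softmax[cos(O~_new, O_i)] * O_i."""
    new_vec = flatten_state_dict(new_sd)
    sims = torch.stack([F.cosine_similarity(new_vec, flatten_state_dict(sd), dim=0)
                        for sd in existing_sds])
    weights = torch.softmax(sims, dim=0)
    combined = {}
    for name in existing_sds[0]:
        stacked = torch.stack([sd[name].float() for sd in existing_sds])
        shape = (-1,) + (1,) * (stacked.dim() - 1)
        combined[name] = (weights.view(shape) * stacked).sum(dim=0)
    return combined
```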
| Approach | Adapter | LoRA | Prefix |
|-------------|-------------|-------------|-------------|
| Cosine | 33.3 (-1.6) | 32.8 (-2.6) | 32.1 (-0.2) |
| Euclidean | 34.8 (-0.1) | 34.4 (-1.0) | 32.7 (+0.4) |
| Performance | 35.1 (+0.2) | 34.4 (-1.0) | 34.8 (+2.5) |
Table 10: Test results of Cosine, Euclidean and Performance. Numbers in parentheses are the difference between the method and the best-performing baseline listed in Table 1.
This approach is also adopted in the transfer of soft prompts by Vu et al. 2021.
Euclidean. We develop this approach following the basic intuition that the more a lightweight object's parameters change after training on the new dataset Tnew, the less relevant it is to Tnew.
We first train Oi on the new task Tnew for m steps and select the best-performing lightweight object Õi in this process. Then, we directly calculate the Euclidean distance between each layer of Oi and Õi to get the distance after summation, di. In this method, the indicator is the negative of the distance: Si = −di. The final lightweight object Onew is obtained by Eq. 1.
Performance. This approach follows the foregoing insight and uses the change in performance to measure the correlation between a lightweight object Oi and Tnew. In the beginning, we directly produce the zero-shot performance of Oi on Tnew without any training, which is denoted as zi. Then we train Oi on Tnew for m steps and select the best performance bi. The indicator is computed by the difference between two numbers Si = bi − zi.
Although the indicator Si of the Performance approach is the difference between two performances, the original cause of this difference is the change in the parameters of the lightweight object, which is why we treat the Performance method as an approach extracting information from the weights.
We apply these 3 approaches to three parameter-efficient tuning methods and experiment on 32 QA
tasks, the results of which are shown in Table 10.
The performance of these approaches is relatively poor. We suspect the reason may be that the information about transferring lightweight objects across tasks contained in the weights is more covert, and our framework cannot efficiently extract it.
## D Analysis Of Utilizing Best Single Lightweight Object
In this section, we probe the performance of transferring knowledge with the best single lightweight object Obest.
| Approach | Adapter | LoRA | Prefix |
|-----------------------------|-------------|-------------|-------------|
| GraNd | 34.3 (-0.6) | 32.6 (-2.8) | 34.8 (+2.5) |
| Loss | 39.8 (+4.9) | 37.9 (+2.5) | 37.2 (+4.9) |
| KL-divergence | 37.4 (+2.5) | 36.5 (+1.1) | 35.6 (+3.3) |
| EL2N | 39.9 (+5.0) | 38.2 (+2.8) | 37.3 (+5.0) |
| Cosine of Logits and Labels | 40.0 (+5.1) | 38.1 (+2.7) | 37.7 (+5.4) |
For a new task Tnew, Obest refers to the existing lightweight object Oi with the highest similarity indicator Si, and we utilize Obest for the initialization of Tnew (i.e., Onew = Obest).
We modify the original framework in accordance with the above description and experiment with our parameter-efficient weight ensembling approaches, the results of which are shown in Table 11. Generally, the performance of obtaining Onew from Obest is similar to that of the primal framework, which is consistent with the experimental phenomenon that Obest occupies roughly 95% of the weight when cultivating Onew in our original framework.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Page 5, after the Conclusion, and before the References.
✗ A2. Did you discuss any potential risks of your work?
Our work is only about algorithms and is unlikely to pose potential risks, so we did not discuss them in our paper.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The abstract is at the beginning of the paper, and the section number of the introduction is 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We use the datasets of 51 tasks in Experiments (Section 3), and we list the names of the datasets in Appendix A (page 8).
✓ B1. Did you cite the creators of artifacts you used?
We cite the datasets in Appendix A(in page 8).
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In Appendix A(in page 8)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** The Section Number Is 3(Experiments).
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We report them in Appendix A(in page 8).
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss the experimental setup in Experiments(the section number is 3) and Appendix A(in page 8).
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report them in Experiments(the section number is 3) and Appendix B.C,D(in page 8-10).
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We discuss the experimental setup in Appendix A(in page 8).
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
atanasova-etal-2023-faithfulness | Faithfulness Tests for Natural Language Explanations | https://aclanthology.org/2023.acl-short.25 | Explanations of neural models aim to reveal a model{'}s decision-making process for its predictions. However, recent work shows that current methods giving explanations such as saliency maps or counterfactuals can be misleading, as they are prone to present reasons that are unfaithful to the model{'}s inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, proving a fundamental tool in the development of faithful NLEs. | # Faithfulness Tests For Natural Language Explanations
Pepa Atanasova1, Oana-Maria Camburu2, Christina Lioma1**, Thomas Lukasiewicz**3,4, Jakob Grue Simonsen1**, Isabelle Augenstein**1 1Department of Computer Science, University of Copenhagen, Denmark 2University College London, UK 3TU Wien, Austria 4University of Oxford, UK
[email protected]
## Abstract
Explanations of neural models aim to reveal a model's decision-making process for its predictions. However, recent work shows that current methods giving explanations such as saliency maps or counterfactuals can be misleading, as they are prone to present reasons that are unfaithful to the model's inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, proving a fundamental tool in the development of faithful NLEs.
## 1 Introduction
Explanations of neural models aim to uncover the reasons behind model predictions in order to provide evidence on whether the model is trustworthy.
To this end, explanations have to be *faithful*, i.e., reflect the decision-making process of the model, otherwise, they could be harmful (Hancox-Li, 2020).
However, recent studies show that explanations can often be unfaithful, covering flaws and biases of the model. Adebayo et al. (2018) show that certain widely deployed explainability approaches that provide saliency maps (with importance scores for each part of the input, e.g., words or super-pixels)
can even be *independent* of the training data or of the model parameters. Others also question the effectiveness and reliability of counterfactuals (Slack et al., 2021), concept activations, and training point ranking explanations (Adebayo et al., 2022).
In this work, we investigate the degree of faithfulness of natural language explanations (NLEs),
which explain model predictions with free text.
NLEs are not constrained to contain only input segments, thus they provide more expressive (Camburu et al., 2021) and usually more human-readable explanations than, e.g., saliency maps (Wiegreffe and Marasovic, 2021). Evaluating the faithfulness of explanations is very challenging in general, as the ground-truth reasons used by a model for a prediction are usually unknown. Evaluating the faithfulness of NLEs is further complicated, as they often include words not present in the input. Thus, existing tests evaluating other types of explanations, e.g., saliency maps, cannot be directly applied to NLEs. As a stepping stone towards evaluating how faithful NLEs are, we design two tests. Our first test investigates whether NLE models are faithful to reasons for counterfactual predictions. We introduce a *counterfactual input editor* that makes counterfactual interventions resulting in new instances on which the model prediction changes but the NLE does not reflect the intervention leading to the change. Our second test reconstructs an input from the reasons stated in a generated NLE, and checks whether the new input leads to a different prediction. We apply our tests to four NLE models over three datasets. We aim for our tests to be an important tool to assess the faithfulness of existing and upcoming NLE models.1
## 2 The Faithfulness Tests
Given a dataset X = (xi, ei, yi), with an input xi, a gold NLE ei, and a gold label yi ∈ L, where L is the set of all labels for X, a model f is trained to produce an NLE and a task prediction for each input: f(xi) = (êi, ŷi). We also refer to êi as f(xi)ex and to ŷi as f(xi)p.
2.1 The Counterfactual Test: Are NLE models faithful to reasons for counterfactual predictions? Studies in cognitive science show that humans usually seek counterfactuals by looking for
1The code is available at https://github.com/copenlu/nle_faithfulness
| Test | Original Instance | Instance After Test Intervention |
|-----------|---------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Counterfactual test (§2) | Premise: Man in a black suit, white shirt and black bowtie playing an instrument with the rest of his symphony surrounding him. Hypothesis: A tall person in a suit. Prediction: neutral NLE: Not all men are tall. | Premise: Man in a black suit, white shirt and black bowtie playing an instrument with the rest of his symphony surrounding him. ➜Hypothesis: A tall person in a blue suit. Prediction: contradiction ✗ NLE: A man is not a tall person. Unfaithfulness cause: inserted word 'blue' ∉ NLE but changed the prediction. |
| Input reconstruction test (§2) | Premise: Many people standing outside of a place talking to each other in front of a building that has a sign that says 'HI-POINTE.' Hypothesis: The people are having a chat before going into the work building. Prediction: neutral NLE: Just because people are talking does not mean they are having a chat. | ➜Premise: People are talking. ➜Hypothesis: They are having a chat. ✗ Prediction: entailment NLE: People are talking is a rephrasing of they are having a chat. Unfaithfulness cause: The reasons in the NLE for the original instance lead to a different prediction. |
factors that explain why event A occurred instead of B (Miller, 2019). Counterfactual explanations were proposed for ML models by making interventions either on the input (Wu et al., 2021; Ross et al., 2021) or on the representation space (Jacovi et al., 2021). An intervention $h(x_i, y_i^C) = x_i'$ is produced over an input instance $x_i$ w.r.t. a target counterfactual label $y_i^C$, $y_i^C \neq \widehat{y}_i$, such that the model predicts the target label: $f(x_i') = \widehat{y}_i' = y_i^C$.

For our test, we search for interventions that insert tokens into the input such that the model gives a different prediction, and we check whether the NLE reflects these tokens. Thus, we define an intervention $h(x_i, y_i^C) = x_i'$ that, for a given counterfactual label $y_i^C$, generates a set of words $W = \{w_j\}$ that, inserted into $x_i$, produces a new instance $x_i' = \{x_{i,1}, \ldots, x_{i,k}, W, x_{i,k+1}, \ldots, x_{i,|x_i|}\}$ such that $f(x_i')_p = y_i^C$. While one can insert each word in $W$ at a different position in $x_i$, here we define $W$ to be a *contiguous* set of words, which is computationally less expensive. As $W$ is the counterfactual for the change in prediction, at least one word from $W$ should be present in the NLE for the counterfactual prediction:

$$h(x_{i},y_{i}^{C})=x_{i}^{\prime}$$
$$x_{i}^{\prime}=\{x_{i,1},\ldots x_{i,k},W,x_{i,k+1},\ldots x_{i,|x_{i}|}\}$$
$$f(h(x_{i},y_{i}^{C}))=f(x_{i}^{\prime})=y_{i}^{C}\neq\widehat{y}_{i}=f(x_{i})$$
$$\text{If }W\cap^{s}\widehat{e}_{i}^{\prime}=\emptyset\text{, then }\widehat{e}_{i}^{\prime}\text{ is unfaithful,}\qquad(1)$$
where the s superscript indicates that the operator is used at the semantic level. Sample counterfactual interventions satisfying Eq. 1 are in Table 1. More examples are in Tables 4 and 5 in the Appendix.
To generate the input edits W, we propose an editor h as a neural model and follow Ross et al. (2021). The authors generate input edits that change the model prediction to target predictions and refer to these edits as explanations. We note that besides the input edits, confounding factors could cause the change in prediction, e.g., the edits could make the model change its focus towards other parts of the input and not base its decision on the edit itself. In this work, we presume that it is still important for the NLEs to point to the edits, as the model changed its prediction when the edit was inserted. This aligns with the literature on counterfactual explanations, where such edits are seen as explanations (Guidotti, 2022). We also hypothesize that confounding factors are rare, especially when insertions rather than deletions are performed. We leave such investigation for future work.
During the training of h, we mask n1% of the tokens in $x_i$, provide as an input to h the label predicted by the model, i.e., $y_i^C = \widehat{y}_i$, and use the masked tokens to supervise the generation of the masked text (corresponding to W). During inference, we provide as target labels $y_i^C \in Y$, $y_i^C \neq \widehat{y}_i$, and we search over n2 different positions to insert n3 candidate tokens at each position at a time. The training objective is the cross-entropy loss for generating the inserts.
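As an illustration, training instances for the editor could be constructed with T5 sentinel tokens roughly as sketched below; the task prefix and label encoding are assumptions for illustration, not the released implementation.

```python
import random

def make_editor_example(tokens, predicted_label, max_span=3):
    """Mask a random contiguous span and supervise the editor to regenerate it."""
    span_len = random.randint(1, max_span)
    start = random.randrange(0, max(1, len(tokens) - span_len))
    masked = tokens[:start] + ["<extra_id_0>"] + tokens[start + span_len:]
    source = f"insert: label: {predicted_label} text: {' '.join(masked)}"
    target = "<extra_id_0> " + " ".join(tokens[start:start + span_len]) + " <extra_id_1>"
    return source, target

src, tgt = make_editor_example("a man plays an instrument on stage".split(), "neutral")
```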
We use as a metric of unfaithfulness the percentage of the instances in the test set for which h finds counterfactual interventions that satisfy Eq. 1.
To compute this automatically, we use $\cap^{s}$ at the syntactical level. As paraphrases of W might appear in the NLEs, we manually verify a subset of NLEs. We leave the introduction of an automated evaluation for the semantic level for future work.
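A minimal sketch of this automatic check: an intervention exposes an unfaithful NLE if it flips the prediction to the target label while none of the inserted words appears in the generated NLE. The whitespace tokenization and lower-casing below are simplifying assumptions.

```python
def is_unfaithful_counterfactual(inserted_words, new_prediction, target_label, new_nle):
    """Return True if the edit changed the prediction but is not reflected in the NLE."""
    if new_prediction != target_label:
        return False  # the intervention did not flip the prediction to the target label
    nle_tokens = {t.strip(".,!?").lower() for t in new_nle.split()}
    return all(w.lower() not in nle_tokens for w in inserted_words)

# Toy usage mirroring Table 1, row 1.
print(is_unfaithful_counterfactual(["blue"], "contradiction", "contradiction",
                                   "A man is not a tall person."))  # True
```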
Our metric is not a complete measure of the overall faithfulness of the NLEs, as (1) we only check whether the NLEs are faithful to the reasons for counterfactual predictions, and (2) it depends on the performance of h. But if h does not succeed in finding a significant number of counterfactual reasons
| Model | %Counter | %Counter Unfaith | %Total Unfaith |
|-------|----------|------------------|----------------|
| e-SNLI | | | |
| MT-Re-Rand | 38.85 | **60.39** | 23.46 |
| MT-Re-Edit | **56.70** | 46.12 | **26.15** |
| MT-Re-Rand+Edit | 64.98 | 53.29 | 34.63 |
| ST-Re-Rand | 37.14 | **54.26** | 20.15 |
| ST-Re-Edit | **49.64** | 52.74 | **26.18** |
| ST-Re-Rand+Edit | 61.15 | 58.27 | 35.63 |
| MT-Ra-Rand | 37.17 | **54.93** | 20.42 |
| MT-Ra-Edit | **55.04** | 41.34 | **22.75** |
| MT-Ra-Rand+Edit | 63.84 | 48.63 | 31.05 |
| ST-Ra-Rand | 35.21 | **57.82** | 20.36 |
| ST-Ra-Edit | **60.00** | 45.66 | **27.39** |
| ST-Ra-Rand+Edit | 67.31 | 55.03 | 37.04 |
| CoS-E | | | |
| MT-Re-Rand | 44.89 | **83.18** | 37.34 |
| MT-Re-Edit | **50.00** | 77.23 | **38.62** |
| MT-Re-Rand+Edit | 59.89 | 85.26 | 51.06 |
| ST-Re-Rand | 52.34 | 79.47 | 41.60 |
| ST-Re-Edit | **53.83** | **86.17** | **46.38** |
| ST-Re-Rand+Edit | 67.45 | 87.54 | 59.04 |
| MT-Ra-Rand | 39.26 | **84.01** | 32.98 |
| MT-Ra-Edit | **50.00** | 78.72 | **39.36** |
| MT-Ra-Rand+Edit | 56.81 | 85.58 | 48.62 |
| ST-Ra-Rand | 46.70 | **75.85** | 35.43 |
| ST-Ra-Edit | **52.02** | 75.05 | **39.04** |
| ST-Ra-Rand+Edit | 63.62 | 81.77 | 52.02 |
| ComVE | | | |
| MT-Re-Rand | 35.60 | **83.43** | 29.70 |
| MT-Re-Edit | **50.90** | 70.53 | **35.90** |
| MT-Re-Rand+Edit | 61.10 | 78.89 | 48.20 |
| ST-Re-Rand | 41.90 | 74.22 | 31.10 |
| ST-Re-Edit | **48.40** | **76.45** | **37.00** |
| ST-Re-Rand+Edit | 62.90 | 77.42 | 48.70 |
| MT-Ra-Rand | 33.70 | **75.67** | 25.50 |
| MT-Ra-Edit | **47.20** | 66.53 | **31.40** |
| MT-Ra-Rand+Edit | 58.10 | 73.84 | 42.90 |
| ST-Ra-Rand | 36.30 | **80.17** | 29.10 |
| ST-Ra-Edit | **49.50** | 79.80 | **39.50** |
| ST-Ra-Rand+Edit | 61.80 | 83.98 | 51.90 |
not reflected in the NLEs, it could be seen as evidence of the faithfulness of the model's NLEs.
2.2 The Input Reconstruction Test: Are the reasons in an NLE sufficient to lead to the same prediction as the one for which the NLE was generated? Existing work points out that for an explanation to be faithful to the underlying model, the reasons $r_i$ *in the explanation* should be *sufficient* for the model to make the same prediction as on the original input (Yu et al., 2019):

$$r_{i}=R(x_{i},\hat{e}_{i})$$
$$\text{If }f(r_{i})_{p}\neq f(x_{i})_{p}\text{, then }\hat{e}_{i}\text{ is unfaithful,}\qquad(2)$$
| | Model | % Reconst | % Total Unfaith |
|--------|-------|-----------|-----------------|
| e-SNLI | MT-Re | 39.49 | 7.7 |
| | ST-Re | 39.99 | 9.7 |
| | MT-Ra | 44.87 | 7.8 |
| | ST-Ra | 43.32 | 9.3 |
| ComVE | MT-Re | 100 | 36.9 |
| | ST-Re | 100 | 22.7 |
| | MT-Ra | 100 | 40.3 |
| | ST-Ra | 100 | 28.5 |
where R is the function that builds a new input $r_i$ given $x_i$ and $\hat{e}_i$. Sufficiency has been employed to evaluate saliency explanations, where the direct mapping between tokens and saliency scores allows $r_i$ to be easily constructed (by preserving only the top-N most salient tokens) (DeYoung et al.,
2020; Atanasova et al., 2020a). For NLEs, which lack such direct mapping, designing an automated extraction R of the reasons in ebiis challenging.
Here, we propose automated agents Rs that are task-dependent. We build Rs for e-SNLI (Camburu et al., 2018) and ComVE (Wang et al., 2020), due to the structure of the NLEs and the nature of these datasets. However, we could not construct an R for CoS-E (Rajani et al., 2019). For e-SNLI, a large number of NLEs follow certain templates. Camburu et al. (2020) provide a list of templates covering 97.4% of the NLEs in the training set. For example, "<X> is the same as <Y>" is an NLE template for entailment. Thus, many of the generated NLEs also follow these templates. In our test, we simply use <X> and <Y> from the templates as the reconstructed pair of premise and hypothesis, respectively. We keep only those <X> and <Y> that are sentences containing at least one subject and at least one verb. If the NLE for the original input was faithful, then we expect the prediction for the reconstructed input to be the same as for the original.
Given two sentences, the ComVE task is to pick the one that contradicts common sense. If the generated NLE is faithful, replacing the correct sentence with the NLE should lead to the same prediction.
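To make the reconstruction step concrete, a sketch for the e-SNLI template '<X> is the same as <Y>' is given below; the regular expression covers only this one template and is meant purely as an illustration of the procedure.

```python
import re

def reconstruct_esnli_input(nle):
    """Map an NLE of the form '<X> is the same as <Y>' to a (premise, hypothesis) pair."""
    match = re.match(r"^(?P<x>.+?) is the same as (?P<y>.+?)\.?$", nle.strip(), flags=re.I)
    if match is None:
        return None  # the NLE does not follow this template
    return match.group("x").strip(), match.group("y").strip()

def reconstruction_is_unfaithful(model_predict, original_pred, reconstructed):
    """Flag the NLE as unfaithful if the reconstructed input changes the prediction."""
    return reconstructed is not None and model_predict(*reconstructed) != original_pred

pair = reconstruct_esnli_input("People are talking is the same as they are having a chat.")
```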
## 3 Experiments
Following Hase et al. (2020), we experiment with four setups for NLE models, which can be grouped by whether the prediction and NLE generation are trained with a multi-task objective using a joint model (MT) or with single-task objectives using separate models (ST). They can also be grouped by whether they generate NLEs conditioned on the predicted label (rationalizing models (Ra)), or not conditioned on it (reasoning models (Re)). The general notation f(xi) = (êi, ŷi) used in §2 includes all four setups:
$$\begin{aligned}
\textbf{MT-Re: } & f_{p,ex}(x_{i})=(\hat{e}_{i},\hat{y}_{i})\\
\textbf{MT-Ra: } & f_{p,ex}(x_{i})=(\hat{e}_{i|\hat{y}_{i}},\hat{y}_{i})\\
\textbf{ST-Re: } & f_{ex}(x_{i})=\hat{e}_{i};\quad f_{p}(x_{i},\hat{e}_{i})=\hat{y}_{i}\\
\textbf{ST-Ra: } & f_{ex}(x_{i},y_{j})=\hat{e}_{i,j};\quad f_{p}(x_{i},\hat{e}_{i,j})=\hat{y}_{j},\\
& j=\operatorname{argmax}_{j\in[1,\ldots,|L|]}(f_{p}(x_{i},\hat{e}_{i,j}))
\end{aligned}\qquad(3)$$
where fp,ex is a joint model for task prediction and NLE generation, fp is a model only for task prediction, and fex is a model only for NLE generation.
The ST-Ra setup produces one NLE êi,j for each yj ∈ L. Given êi,j and xi, fp predicts the probability of the corresponding label yj and selects as ŷi the label with the highest probability.
For both f and the editor h, we employ the pretrained T5-base model (Raffel et al., 2020). The editor uses task-specific prefixes for insertion and NLE generation. We train both f and h for 20 epochs, evaluate them on the validation set at each epoch, and select the checkpoints with the highest success rate (see §2). We use a learning rate of 1e-4 with the Adam optimizer (Kingma and Ba, 2014). For the editor, during training, we mask n1 consecutive tokens with one mask token, where n1 is chosen at random in [1, 3]. During inference, we generate candidate insertions for n2 = 4 random positions, with n3 = 4 candidates for each position at a time. The hyper-parameters are chosen with a grid search over the validation set.4 For the manual evaluation, an author annotated the first 100 test instances for each model (800 in total). The manual evaluation has been designed in accordance with related work (Camburu et al., 2018), which also evaluated 100 instances per model. We found that no instances were using paraphrases. Hence, in our work, the automatic metric can be trusted.
Baseline. For the counterfactual test, we incorporate a random baseline as a comparison. Specifically, we insert a random adjective before a noun or a random adverb before a verb. We randomly select n2 = 4 positions where we insert the said words, and, for each position at a time, we consider n3 = 4 random candidate words. The candidates are single words randomly chosen from the complete list of adjectives or adverbs available in WordNet (Fellbaum, 2010). We identify the nouns and verbs in the text with spaCy (Honnibal et al., 2020).
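A possible implementation of this random baseline, assuming WordNet via NLTK (after `nltk.download("wordnet")`) and the `en_core_web_sm` spaCy model, might look as follows; the exact sampling details are illustrative.

```python
# Sketch of the random-insertion baseline: a random adjective before a noun,
# or a random adverb before a verb, at randomly chosen positions.
import random
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")
ADJECTIVES = sorted({lem.name().replace("_", " ") for s in wn.all_synsets("a") for lem in s.lemmas()})
ADVERBS = sorted({lem.name().replace("_", " ") for s in wn.all_synsets("r") for lem in s.lemmas()})

def random_insertions(text: str, n_positions: int = 4, n_candidates: int = 4):
    """Return candidate texts with a random adjective/adverb inserted before a noun/verb."""
    doc = nlp(text)
    slots = [(tok.i, ADJECTIVES if tok.pos_ == "NOUN" else ADVERBS)
             for tok in doc if tok.pos_ in ("NOUN", "VERB")]
    candidates = []
    for idx, vocab in random.sample(slots, k=min(n_positions, len(slots))):
        for word in random.sample(vocab, k=n_candidates):
            tokens = [t.text for t in doc]
            candidates.append(" ".join(tokens[:idx] + [word] + tokens[idx:]))
    return candidates
```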
Datasets. We use three popular datasets with NLEs: e-SNLI (Camburu et al., 2018), CoS-E (Rajani et al., 2019), and ComVE (Wang et al., 2020).
e-SNLI contains NLEs for SNLI (Bowman et al., 2015), where, given a premise and a hypothesis, one has to predict whether they are in a relationship of *entailment* (the premise entails the hypothesis), *contradiction* (the hypothesis contradicts the premise), or *neutral* (neither entailment nor contradiction hold). CoS-E contains NLEs for commonsense question answering, where given a question, one has to pick the correct answer out of three given options. ComVE contains NLEs for commonsense reasoning, where given two sentences, one has to pick the one that violates common sense.
## 3.1 Results
Counterfactual Test. Table 2 shows the results of our counterfactual test. First, we observe that when the random baseline finds words that change the prediction of the model, the words are more often not found in the corresponding NLE compared to the counterfactual editor (% Counter Unfaith).
We conjecture that this is because the randomly selected words are rare for the dataset compared to the words that the editor learns to insert. Second, the counterfactual editor is better at finding words that lead to a change in the model's prediction, which in turn results in a higher percentage of unfaithful instances in general (% Total Unfaith).
We also observe that the insertions W lead to counterfactual predictions for up to 56.70% of the instances (for MT-Re-Edit on e-SNLI). For up to 46.38% of the instances (for ST-Re-Edit on CoS-E), the editor is able to find an insertion for which the counterfactual NLE is unfaithful. Table 1, row 1, presents one such example. More examples for the random baseline can be found in Table 4, and for the counterfactual editor in Table 5. Finally, taking the union of the counterfactual interventions discovered by the random baseline and the editor, we observe total percentages of up to 59.04% unfaithfulness to the counterfactual.
We see that for all datasets and models, the total percentages of unfaithfulness to the counterfactual are high, between 37.04% (for MT-Ra-Rand+Edit on e-SNLI) and 59.04% (for ST-Re-Rand+Edit on CoS-E).
We re-emphasize that this should not be interpreted as an overall estimate of unfaithfulness, as our test is not complete (see §2).
The Input Reconstruction Test. Table 3 shows the results of the input reconstruction test. We were able to reconstruct inputs for up to 4487 out of the 10K test instances in e-SNLI, and for all test instances in ComVE. There are, again, a substantial number of unfaithful NLEs: up to 14% for e-SNLI,
and up to 40% for ComVE. An example is in Table 1, row 2. More examples can be found in Table 6. We also notice that this test identified considerably more unfaithful NLEs for ComVE than for e-SNLI, while for our first test, the gap was not as pronounced. This shows the utility of developing diverse faithfulness tests.
Finally, all four types of models had similar faithfulness results5 on all datasets and tests, with no consistent ranking among them. This opposes the intuition that some configurations may be more faithful than others; e.g., Camburu et al. (2018) hypothesized that ST-Re may be more faithful than MT-Re, which holds in most but not all cases: on CoS-E, the editor finds more unfaithfulness for ST-Re (44.04%) than for MT-Re (42.76%). We also observe that Re models tend to be less faithful than Ra models in most cases.
## 4 Related Work
Tests for Saliency Maps. The faithfulness and, more generally, the utility of explanations were predominantly explored for saliency maps. Comprehensiveness and sufficiency (DeYoung et al., 2020)
were proposed for evaluating the faithfulness of existing saliency maps. They measure the decrease in a model's performance when only the most or the least important tokens are removed from the input. Madsen et al. (2022) propose another faithfulness metric for saliency maps, ROAR, obtained by masking allegedly important tokens and then retraining the model. In addition, Yin et al. (2022)
and Hsieh et al. (2021) evaluate saliency maps through adversarial input manipulations presuming that model predictions should be more sensitive to manipulations of the more important input regions as per the saliency map. Chan et al. (2022b)
provide a comparative study of faithfulness measures for saliency maps. Further faithfulness testing for saliency maps was introduced by Camburu et al. (2019). Existing studies also pointed out that saliency maps can be manipulated to hide a classifier's biases towards dataset properties such as gender and race (Dombrowski et al., 2019; Slack et al., 2020; Anders et al., 2020). While diagnostic methods for saliency maps rely on the one-to-one correspondence between the saliency scores and the regions of the input, this correspondence is not present for NLEs, where text not in the input can be included. Thus, diagnostic methods for saliency maps are not directly applicable to NLEs. To this end, we propose diagnostic tests that can be used to evaluate NLE model faithfulness.
Tests for NLEs. Existing work often only looks at the plausibility of the NLEs (Rajani et al., 2019; Kayser et al., 2021; Marasović et al., 2022; Narang et al., 2020; Kayser et al., 2022; Yordanov et al.,
2022). In addition, Sun et al. (2022) investigated whether the additional context available in human- and model-generated NLEs can benefit model prediction as they benefit human users. Differently, Hase et al. (2020) proposed to measure the utility of NLEs in terms of how well an observer can simulate a model's output given the generated NLE. The observer could be an agent (Chan et al., 2022a) or a human (Jolly et al., 2022; Atanasova et al., 2020b).
The only work we are aware of that introduces sanity tests for the faithfulness of NLEs is that of Wiegreffe et al. (2021), who suggest that an association between labels and NLEs is necessary for faithful NLEs and propose two pass/fail tests: (1)
whether the predicted label and generated NLE are similarly robust to noise, (2) whether task prediction and NLE generation share the most important input tokens for each. Majumder et al. (2022) use these tests as a sanity check for the faithfulness of their model. Our tests are complementary and offer quantitative metrics.
## 5 Summary And Outlook
In this work, we introduced two tests to evaluate the faithfulness of NLE models. We find that all four high-level setups of NLE models are prone to generate unfaithful NLEs, reinforcing the need for proof of faithfulness. Our tests can be used to ensure the faithfulness of emerging NLE models and inspire the community to design complementary faithfulness tests.
5Task accuracy and NLE quality are given in Table 7.
While our tests are an important stepping stone for evaluating the faithfulness of NLEs, they are not comprehensive. Hence, a model that would perform perfectly on our tests may still generate unfaithful NLEs.
Our first test inspects whether NLE models are faithful to reasons for counterfactual predictions. It is important to highlight that NLEs may not comprehensively capture all the underlying reasons for a model's prediction. Thus, an NLE that fails to accurately represent the reasons for counterfactual predictions may still offer faithful explanations by reflecting other relevant factors contributing to the predictions. Additionally, both the random baseline and the counterfactual editor can generate insertions that result in text lacking semantic coherence.
To address this limitation, future research can explore methods to generate insertion candidates that are both semantically coherent and reveal unfaithful NLEs.
Our second test uses heuristics that are task-dependent and may not be applicable to every task.
The reconstruction functions Rs proposed in this work are based on hand-crafted rules for the e-SNLI and ComVE datasets. However, due to the nature of the CoS-E NLEs, rule-based input reconstructions were not possible for this dataset. To address this limitation, future research could investigate automated reconstruction functions that utilize machine learning models. These models would be trained to generate reconstructed inputs based on the generated NLEs, where a small number of annotations would be provided as training instances. For example, for CoS-E, one such training annotation could be: *Original Question:* After getting drunk people couldn't understand him, it was because of his what? *Choices:* lower standards, slurred speech, or falling down. *Answer:*
slurred speech. *NLE:* People who are drunk have difficulty speaking. → *Reconstructed Question:*
What do drunk people have difficulty with? *Reconstructed Choices:* lower standards, speaking, or falling down. This approach would enable the development of machine learning models capable of generating reconstructed inputs for various datasets.
## Acknowledgements
The research documented in this paper has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. Isabelle Augenstein's research is further partially funded by a DFF Sapere Aude research leader grant under grant agreement No 0171-00034B, as well as by the Pioneer Centre for AI, DNRF grant number P1. Thomas Lukasiewicz was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, the AXA Research Fund, and the EU TAILOR grant 952215.
Oana-Maria Camburu was supported by a UK
Leverhulme Early Career Fellowship. Christina Lioma's research is partially funded by the Villum and Velux Foundations Algorithms, Data and Democracy (ADD) grant.
## References
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018.
Sanity checks for saliency maps. *Advances in Neural Information Processing Systems*, 31.
Julius Adebayo, Michael Muelly, Harold Abelson, and Been Kim. 2022. Post hoc explanations may be ineffective for detecting unknown spurious correlation.
In *International Conference on Learning Representations*.
Christopher Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, Klaus-Robert Müller, and Pan Kessel.
2020. Fairwashing explanations with off-manifold detergent. In International Conference on Machine Learning, pages 314–323. PMLR.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020a. A diagnostic study of explainability techniques for text classification. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 3256–3274, Online. Association for Computational Linguistics.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020b. Generating fact checking explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7352–7364, Online. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages
632–642, Lisbon, Portugal. Association for Computational Linguistics.
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, and Phil Blunsom. 2019. Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods. In *NeurIPS 2019 Workshop* Safety and Robustness in Decision Making.
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, and Phil Blunsom. 2021. The struggles of feature-based explanations: Shapley values vs. minimal sufficient subsets. In AAAI 2021 Workshop on Explainable Agency in Artificial Intelligence.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 9539–9549. Curran Associates, Inc.
Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, and Phil Blunsom. 2020. Make up your mind! adversarial generation of inconsistent natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4157–
4165, Online. Association for Computational Linguistics.
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, and Xiang Ren. 2022a. Frame: Evaluating simulatability metrics for free-text rationales. arXiv preprint arXiv:2207.00779.
Chun Sik Chan, Huanqi Kong, and Liang Guanqing.
2022b. A comparative study of faithfulness metrics for model interpretability methods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 5029–5038, Dublin, Ireland. Association for Computational Linguistics.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online. Association for Computational Linguistics.
Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. 2019. Explanations can be manipulated and geometry is to blame. In *Advances in Neural Information Processing Systems*,
volume 32. Curran Associates, Inc.
Christiane Fellbaum. 2010. Wordnet. In Theory and Applications of Ontology: Computer Applications, pages 231–243. Springer.
Riccardo Guidotti. 2022. Counterfactual explanations and how to find them: literature review and benchmarking. *Data Mining and Knowledge Discovery*,
pages 1–55.
Leif Hancox-Li. 2020. Robustness in machine learning explanations: does it matter? In *Proceedings of the* 2020 Conference on Fairness, Accountability, and Transparency, pages 640–647.
Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language? In *Findings of the Association for Computational Linguistics: EMNLP*
2020, pages 4351–4367, Online. Association for Computational Linguistics.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy:
Industrial-strength Natural Language Processing in Python.
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Kumar Ravikumar, Seungyeon Kim, Sanjiv Kumar, and Cho-Jui Hsieh. 2021. Evaluations and methods for explanation through robustness analysis.
In *International Conference on Learning Representations*.
Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, and Yoav Goldberg. 2021.
Contrastive explanations for model interpretability.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1597–1611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shailza Jolly, Pepa Atanasova, and Isabelle Augenstein.
2022. Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing. *Information*,
13(10).
Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, and Thomas Lukasiewicz. 2021. e-ViL: A
dataset and benchmark for natural language explanations in vision-language tasks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1244–1254.
Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, and Thomas Lukasiewicz. 2022. Explaining chest x-ray pathologies in natural language. In *Medical Image Computing and Computer Assisted Intervention - MICCAI 2022*, pages 701–713, Cham. Springer Nature Switzerland.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *arXiv preprint* arXiv:1412.6980.
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, and Siva Reddy. 2022. Evaluating the Faithfulness of Importance Measures in NLP by Recursively
Masking Allegedly Important Tokens and Retraining. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 1731–1751, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Bodhisattwa Prasad Majumder, Oana Camburu, Thomas Lukasiewicz, and Julian Mcauley. 2022.
Knowledge-grounded self-rationalization via extractive and natural language explanations. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 14786–14801. PMLR.
Ana Marasović, Iz Beltagy, Doug Downey, and Matthew E. Peters. 2022. Few-shot self-rationalization with natural language prompts. In Findings of NAACL.
Tim Miller. 2019. Explanation in artificial intelligence:
Insights from the social sciences. *Artificial intelligence*, 267:1–38.
Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020.
WT5?! training text-to-text models to explain their predictions.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself!
leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
Alexis Ross, Ana Marasović, and Matthew Peters.
2021. Explaining NLP models via minimal contrastive editing (MiCE). In Findings of the Association for Computational Linguistics: ACL-IJCNLP
2021, pages 3840–3852, Online. Association for Computational Linguistics.
Dylan Slack, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. 2021. Counterfactual explanations can be manipulated. *Advances in Neural Information Processing Systems*, 34:62–75.
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2020. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In *Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society*, AIES '20, page 180–186, New York, NY, USA. Association for Computing Machinery.
Jiao Sun, Swabha Swayamdipta, Jonathan May, and Xuezhe Ma. 2022. Investigating the benefits of freeform rationales. In Findings of the Association for
Computational Linguistics: EMNLP 2022, pages 5867–5882, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020.
SemEval-2020 task 4: Commonsense validation and explanation. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 307–321, Barcelona (online). International Committee for Computational Linguistics.
Sarah Wiegreffe and Ana Marasović. 2021. Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
Sarah Wiegreffe, Ana Marasović, and Noah A. Smith.
2021. Measuring association between labels and free-text rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10266–10284, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6707–6723, Online. Association for Computational Linguistics.
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, and Kai-Wei Chang. 2022. On the Sensitivity and Stability of Model Interpretations in NLP. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2631–2647, Dublin, Ireland. Association for Computational Linguistics.
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, and Oana-Maria Camburu. 2022. Few-Shot Out-ofDomain Transfer of Natural Language Explanations.
In *Proceedings of the Findings of the Conference on* Empirical Methods in Natural Language Processing
(EMNLP).
Mo Yu, Shiyu Chang, Yang Zhang, and Tommi Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4094–4103, Hong Kong, China. Association for Computational Linguistics.
## A More Examples Of Unfaithful Nles
| Dataset | Original Instance | Instance After Test Intervention |
|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| CoS-E | Question: What happens when spending money without paying someone back? Choices: poverty, debt, and bankruptcy Prediction: debt NLE: debt is the only option that is not something that can be paid back. | ➜Question: What happens when increasingly spending money without paying someone back? Choices : poverty, debt, and bankruptcy. Prediction: bankruptcy ✗ NLE: bankruptcy is the only option that is a result of spending money. Unfaithfulness cause: inserted word 'increasingly' ∈/ NLE but changed the prediction. |
| ComVE | Sent 1: Everyone hates paying taxes Sent 2: Nobody hates paying taxes Prediction: first sentence NLE: Paying taxes is a good thing | Sent 1: Everyone hates paying taxes ➜Sent 2 Nobody ardently hates paying taxes Prediction: second sentence ✗ NLE: Paying taxes is a good thing Unfaithfulness cause: inserted word 'ardently' ∈/ NLE but changed the prediction. |
| e-SNLI | Premise: A man wearing glasses and a ragged costume is playing a Jaguar electric guitar and singing with the accompaniment of a drummer. Hypothesis: A man with glasses and a disheveled outfit is playing a guitar and singing along with a drummer. Prediction: entailment NLE: A ragged costume is a disheveled outfit. | Premise: A man wearing glasses and a ragged costume is playing a Jaguar electric guitar and singing with the accompaniment of a drummer. ➜Hypothesis: A man with glasses and a disheveled outfit is playing a guitar and singing along with a semi-formal drummer. Prediction: neutral ✗ NLE: Not all ragged costumes are disheveled. Unfaithfulness cause: inserted word 'semi-formal' ∈/ NLE but changed the prediction. |
Table 4: Examples of unfaithful explanations detected with **random insertion baseline**. (see §2). The examples are selected for the MT-RA models for all three datasets. We apply the tests on an original instance (second column), which results in a new instance (third column). The parts of the input changed by the test are marked with ➜, and the intervention made by the test is in blue. ✗ marks an NLE or a prediction that does not match the expectation, thus pointing to the underlined NLE being unfaithful.
| Dataset | Original Instance | Instance After Test Intervention |
|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| CoS-E | Question: Where can books be read? Choices: shelf, table, and backpack Prediction: table NLE: books are usually read on a table. | ➜Question: Where outside can books be read? Choices : shelf, table, and backpack. Prediction: backpack ✗ NLE: books are usually stored in backpacks. Unfaithfulness cause: inserted word 'outside' ∈/ NLE but changed the prediction. |
| ComVE | Sent 1: When people are hungry they drink water and do not eat food. Sent 2: People eat food when they are hungry. Prediction: first sentence NLE: Water is not a food and cannot satisfy people's hunger. | Sent 1: When people are hungry they drink water and do not eat food. ➜Sent 2 People eat food so many times when they are hungry. Prediction: second sentence ✗ NLE: Eating food is not a good way to get rid of hunger. Unfaithfulness cause: inserted words 'so many times' ∈/ NLE but changed the prediction. |
| e-SNLI | Premise: Two women having drinks at the bar. Hypothesis: Three women are at a bar. Prediction: contradiction NLE: Two women are not three women. | Premise: Two women having drinks at the bar. ➜Hypothesis: Three women are together at a bar. Prediction: entailment ✗ NLE: Two women are three women. Unfaithfulness cause: inserted word 'together' ∈/ NLE but changed the prediction. |
Table 5: Examples of unfaithful explanations detected with **counterfactual editor**. (see §2). The examples are selected for the MT-RA models for all three datasets. We apply the tests on an original instance (second column),
which results in a new instance (third column). The parts of the input changed by the test are marked with ➜, and the intervention made by the test is in blue. ✗ marks an NLE or a prediction that does not match the expectation, thus pointing to the underlined NLE being unfaithful.
| Model | Original Instance | Instance After Test Intervention |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Dataset, ComVE, ST-RE | Sent 1: Crack addicts are addicted to chocolate milk. Sent 2: Crack addicts are addicted to crack. Prediction: first sentence Explanation: Chocolate milk is not addictive. | ➜Sent 1: Crack addicts are addicted to chocolate milk. ➜Sent 2: Chocolate milk is not addictive. ✗ Prediction: second sentence Explanation: Chocolate milk contains a lot of addictive chemicals. |
| ComVE, ST-RA | Sent 1: He visited a doctor to cure his sickness Sent 2: He went to a lawyer to cure his sickness Prediction: second sentence Explanation: Lawyers do not treat people. | ➜Sent 1: Lawyers do not treat people. ➜Sent 2: He went to a lawyer to cure his sickness ✗ Prediction: first sentence Explanation: Lawyers treat people |
| ComVE, MT-RE | Sent 1: Giraffes have long necks. Sent 2: Monkeys have long necks. Prediction: second sentence Explanation: Monkeys have short necks. | ➜Sent 1: Monkeys have short necks. ➜Sent 2: Monkeys have long necks. ✗ Prediction: first sentence Explanation: Monkeys have long necks. |
| MT-RA | Sent 1: My knee was scrapped and I put ointment on it. | |
| ComVE, | Sent 2: My knee was scrapped and I put dirt on it. Prediction: first sentence Explanation: Ointment is not used to scrape a knee. | ➜Sent 1: My knee was scrapped and I put ointment on it. ➜Sent 2: Ointment is not used to scrape a knee. ✗ Prediction: second sentence Explanation: Ointment is used to scrape a knee. |
| e-SNLI, ST-RE | Premise: People are riding bicycles in the street, and they are all wearing helmets. Hypothesis: A group of friends are grabbing their bikes, getting ready for the morning bike ride. Prediction: contradiction Explanation: Just because people are riding bicycles does not mean they are friends. | ➜Premise: People are riding bicycles. ➜Hypothesis: They are friends. ✗ Prediction: neutral Explanation: People riding bicycles are not necessarily friends. |
| ST-RA | Premise: A woman is walking her dog and using her cellphone. | |
| e-SNLI, | Hypothesis: The woman is playing a game on her cellphone. Prediction: neutral Explanation: Just because a woman is using her cellphone does not mean she is playing a game. | ➜Premise: A woman is using her cellphone. ➜Hypothesis: She is playing a game. ✗ Prediction: contradiction Explanation: The woman can not be using her cellphone and playing a game at the same time. |
| e-SNLI, MT-RE | Premise: A person wearing a red shirt is falling off a white surfboard. Hypothesis: A surfer is being flung in to the waves after hiting a large crest. Prediction: neutral Explanation: Just because a man is confused doesn't mean he doesn't know where he is. | ➜Premise: A person falling off a surfboard. ➜Hypothesis: The surfer is being flung in to the waves. ✗ Prediction: entailment Explanation: A person falling off a surfboard is being flung in to the waves. |
| e-SNLI, MT-RA | Premise: Many people standing outside of a place talking to each other in front of a building that has a sign that says 'HI-POINTE.' Hypothesis: The people are having a chat before going into the work building. Prediction: neutral Explanation: Just because people are talking does not mean they are having a chat. | ➜Premise: People are talking. ➜Hypothesis:They are having a chat. ✗ Prediction: entailment Explanation: People are talking is a rephrasing of they are having a chat. |
## A.1 Model Performance

| Dataset | Model | Acc↑ | BLEU↑ |
|---------|-------|------|-------|
| SNLI | MT-Re | 88.24 | 20.01 |
| SNLI | ST-Re | 87.68 | 19.67 |
| SNLI | MT-Ra | 88.10 | 20.67 |
| SNLI | ST-Ra | 87.63 | 20.59 |
| CoS-E | MT-Re | 65.79 | 5.75 |
| CoS-E | ST-Re | 66.11 | 6.66 |
| CoS-E | MT-Ra | 66.95 | 5.55 |
| CoS-E | ST-Ra | 67.79 | 7.85 |
| ComVE | MT-Re | 85.70 | 7.53 |
| ComVE | ST-Re | 84.40 | 6.68 |
| ComVE | MT-Ra | 86.40 | 7.03 |
| ComVE | ST-Ra | 86.40 | 7.21 |
Table 7: Performance of the models described in Eq 3. Acc denotes the prediction performance of the model on the corresponding task. BLEU denotes the BLEU score of the generated explanation compared to the gold human ones.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section
✓ A2. Did you discuss any potential risks of your work?
Limitations Section (the risk of our tests being seen as comprehensive has been addressed)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and 1. Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. We only used existing datasets that are not specifically created for artifacts
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 3. Experiments
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3. Experiments
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
just a single run, sections 2 and 3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Sections 2 and 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Sections 2 and 3
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
The annotation was done by an author D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. The annotation was done by an author
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. We did not collect data, the annotations were done for evaluation of our methods only
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zandie-etal-2023-cogen | {COGEN}: Abductive Commonsense Language Generation | https://aclanthology.org/2023.acl-short.26 | Reasoning is one of the most important elements in achieving Artificial General Intelligence (AGI), specifically when it comes to Abductive and counterfactual reasoning. In order to introduce these capabilities of reasoning in Natural Language Processing (NLP) models, there have been recent advances towards training NLP models to better perform on two main tasks - Abductive Natural Language Inference (alphaNLI) and Abductive Natural Language Generation Task (alphaNLG). This paper proposes CoGen, a model for both alphaNLI and alphaNLG tasks that employ a novel approach of combining the temporal commonsense reasoning for each observation (before and after a real hypothesis) from pre-trained models with contextual filtering for training. Additionally, we use state-of-the-art semantic entailment to filter out the contradictory hypothesis during the inference. Our experimental results show that CoGen outperforms current models and set a new state of the art in regards to alphaNLI and alphaNLG tasks. We make the source code of CoGen model publicly available for reproducibility and to facilitate relevant future research. |

# COGEN: Abductive Commonsense Language Generation
Rohola Zandie1, Diwanshu Shekhar2**, Mohammad H. Mahoor**1 1Department of Electrical and Computer Engineering 2Department of Computer Science University of Denver Denver, USA
[email protected] [email protected] [email protected]
## Abstract
Reasoning is one of the most important elements in achieving Artificial General Intelligence (AGI), specifically when it comes to Abductive and counterfactual reasoning. In order to introduce these capabilities of reasoning in Natural Language Processing (NLP) models, there have been recent advances toward training NLP models to better perform on two main tasks - Abductive Natural Language Inference
(αNLI) and Abductive Natural Language Generation Task (αNLG). This paper proposes COGEN, a model for both αNLI and αNLG tasks that employs a novel approach of combining the temporal commonsense reasoning for each observation (before and after a real hypothesis)
from pre-trained models with entailment-based filtering for training. Additionally, we use state-of-the-art semantic entailment to filter out contradictory hypotheses during inference.
Our experimental results show that COGEN
outperforms current models and sets a new state of the art on the αNLI and αNLG tasks.
We make the source code of the COGEN model publicly available for reproducibility and to facilitate relevant future research.
## 1 Introduction
Different kinds of reasoning can be categorized into three classes (Walton, 2014): Deduction, Induction, and Abduction. In deduction, the truth of the conclusion is already provided in the premise, therefore, it is impossible that the premises are true and the conclusion is false. Induction is the process of going from the truth of some premises to the conclusion. Finally, abduction is the process of forming the most plausible hypothesis based on incomplete observations. The focus of this paper is on abductive reasoning.
The abductive inference could be viewed as going backward from the conclusions of a valid deductive inference to the premises to find its plausible causes and effects. In terms of classical logic, this is a fallacy (Andersen, 1973). Abductive reasoning is defeasible (and also non-monotonic) which means the conclusions can be refuted in the light of new data. Although abductive reasoning forms one of the core abilities of human cognition, its research in the area of NLP is still widely unexplored.
Recent work on large language models like GPT-3 (Brown et al., 2020) and GPT-Neo (Gao et al., 2020) has achieved impressive results on different NLP tasks but still struggles with Abductive Natural Language Inference (αNLI) tasks. These models embed a great deal of world knowledge (Petroni
Abductive commonsense language generation can be formulated as a controlled language generation task. Like other controllable language generation problems that involve maintaining fluency and relevance of the generated text conditioned on some property, such as sentiment (Lample et al.,
2018), topic (Zandie and Mahoor, 2021), and style
(Shen et al., 2017), the abductive commonsense language generation can be viewed as a controllable language generation task that is conditioned on incomplete observations.
In this paper, we introduce COGEN1, a model for generating and inferring abductive reasons that are compatible with observations. This combines temporal commonsense reasoning for each observation (before and after the hypothesis) from pretrained models with contextual filtering for training.
Contextual filtering refers to the technique of refining temporal entailment during text generation to produce more coherent and contextually relevant output. We also use state-of-the-art semantic entailment to filter out contradictory hypotheses during inference. Our results show that COGEN outperforms all previous models on the αNLI and αNLG tasks.

1Codes and data are publicly available at: https://github.com/roholazandie/abduction_modeling

![Figure 1: Pipeline for temporal commonsense generation and contextual filtering.](1_image_0.png)
Our main contributions are the following:
1. Using temporal commonsense reasoning for augmenting the observations - a crucial step in the abductive hypothesis generation as this task requires understanding the temporal relationships such as causes, effects, reasons, and intents.
2. Using contextual filtering to help narrow down the space of generated commonsense reasoning to the ones that are relevant to both observations.
3. Using the semantic entailment filtering to rule out the possibility of generating contradictory hypotheses given both observations.
4. Releasing the source code of the COGEN
model for reproducibility and assisting relevant future research.
## 2 Related Work
Previous research on reasoning in NLP mainly focuses on monotonic reasoning, which is usually about finding the "entailment", "contradiction" or
"neutral" relationships between a premise and a hypothesis. For example, SNLI (Bowman et al.,
2015) and MultiNLI (Williams et al., 2018) are both datasets that focus on monotonic inference.
There is a choice of plausible reasoning task with the COPA dataset (Roemmele et al., 2011) which is designed for causal reasoning.
In (Qin et al., 2019), the authors introduced the TimeTravel dataset which contains over 28k counterfactual instances. The results show the current language models lack understanding of the reasoning behind the stories, sometimes even adding more samples will not improve the quality of the generation. (Qin et al., 2020) proposes Delorean, a new unsupervised decoding algorithm based on backpropagation that incorporates observations from the past and future to generate constrained text in between. They used the ART dataset (Bhagavatula et al., 2019) which contains 20k samples.
The most relevant work to COGEN is Abductive Commonsense Reasoning (COMeT-Emb+GPT2) (Bhagavatula et al., 2019), which introduces the ART dataset consisting of 20k commonsense narrative contexts with 200k explanations. They also introduced two tasks: abductive NLI (αNLI), a multiple-choice task for choosing the best hypothesis, and abductive NLG (αNLG), which generates an abductive hypothesis given the two before and after contextual observations. Results showed that abductive NLG is much more challenging than αNLI and needs further research. They also used GPT-2 and COMET (Bosselut et al., 2019)
for commonsense reasoning to generate new abductive hypotheses. The human judgment results show that only 44.56 percent of these generated hypotheses make sense to evaluators. In (Paul and Frank, 2021), they consider possible events emerging from the candidate hypothesis and then select the one that is most similar to the observed outcome.
Their approach outperforms COMeTEmb+GPT2 on the αNLI task and achieves 72.2 on the test set. (Ji et al., 2020) proposed GRF, which is based on GPT-2 and dynamic multi-hop reasoning for multi-relational paths extracted from ConceptNet for αNLG.
REFLECTIVE DECODING (West et al., 2020)
is an unsupervised text generation algorithm for text infilling that uses two pre-trained forward and backward language models. This algorithm outperforms all unsupervised methods, but is still significantly behind the fine-tuned model of COMeTEmb+GPT2 in abductive generation.
## 3 Method
Abductive reasoning can be formulated using a single observation as a premise and generating a hypothesis. However, following (Qin et al., 2020)
we formulate abductive commonsense language generation as the task of generating a hypothesis H given two observations, O1 and O2 that happen at times t1 and t2, respectively, in which t2 > t1.
The hypothesis H happens between t1 and t2.
This shows that abductive and temporal reasoning are closely related to each other (Verdoolaege et al.,
2000). More specifically, abductive reasoning requires temporal reasoning about the consequences of events (what typically occurs after them) and the reasons behind them (what may happen prior to or trigger them).
Commonsense knowledge graphs (CSKB) are knowledge graphs containing many commonsense facts about the world that help to understanding and reasoning about events, social interactions and physical entities. ATOMIC2020 (Hwang et al.,
2020) is the largest CSKB having 1.33M tuples about entities and events of inferential knowledge and introduces 23 relation types. In this paper we focus on two classes of these relations: *before* relations and *after* relations. *before* relations are those that take place before the observation or trigger them, such as: isBefore, Causes, xEffect, xReacts, xIntents, and xWants. *after* relations are those that occur after the observation, such as: isAfter, oReact, oWant, oEffect, xReason.
Neural Knowledge Graphs are models trained on CSKB tuples that are able to generate tails given new heads. For instance, predicting the tail of the tuple (X votes for Y, xIntents, ?) means generating "to give support". We use the state-of-the-art pretrained Bidirectional and Autoregressive Transformer (BART) (Lewis et al., 2020) named COMET that is trained on ATOMIC2020.
For temporal commonsense augmentation, we generate n *after* relation facts for O1 and n *before* relation facts for O2. Comet(O, R) is the function that generates the commonsense for observation O for the relation R. If $R_A$ and $R_B$ are the *after* and *before* relations, then the following commonsense responses are generated:

$$C^{A}=\mathrm{Comet}(O_{1},R_{A})\qquad(1)$$

$$C^{B}=\mathrm{Comet}(O_{2},R_{B})\qquad(2)$$
However, not all *after* and *before* relations are relevant for every situation. The generated commonsense facts should be filtered based on the context. For each commonsense relation, we choose the most likely fact based on the semantic similarity to the other observation. More specifically, the most likely *after* (*before*) fact for O1 (O2) is chosen based on its similarity to O2 (O1):
$$c^{A}=\operatorname*{argmax}_{c_{i}}\ \mathrm{Sim}(O_{2},C_{i}^{A})\qquad(3)$$

$$c^{B}=\operatorname*{argmax}_{c_{i}}\ \mathrm{Sim}(O_{1},C_{i}^{B})\qquad(4)$$
where Sim is the cross-encoder (Reimers and Gurevych, 2019) based on BERT that calculates the similarity of two input texts. Figure 1 shows the pipeline for temporal commonsense generation and contextual filtering. This is similar to how we consider possible conclusions from the observations. We try to limit these based on how well they correspond to other observations in hand (Paul and Frank, 2021).
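The following sketch, under stated assumptions, shows how the augmentation and contextual filtering of Eqs. (1)-(4) could be implemented with a BART-based COMET checkpoint and a cross-encoder similarity model. The checkpoint path, the "{head} {relation} [GEN]" query format, and the specific relations queried are assumptions rather than the authors' exact code.

```python
# Sketch of temporal commonsense generation (Comet) plus contextual filtering (Sim).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sentence_transformers import CrossEncoder

COMET_PATH = "path/to/comet-atomic-2020-bart"                 # placeholder path, not from the paper
comet_tok = AutoTokenizer.from_pretrained(COMET_PATH)
comet = AutoModelForSeq2SeqLM.from_pretrained(COMET_PATH)
sim_model = CrossEncoder("cross-encoder/stsb-roberta-base")   # stands in for Sim(., .)

def comet_generate(observation: str, relation: str, n: int = 5):
    """C = Comet(O, R): generate n candidate commonsense tails for one observation/relation."""
    query = f"{observation} {relation} [GEN]"                 # assumed COMET-ATOMIC-2020 query format
    inputs = comet_tok(query, return_tensors="pt")
    outs = comet.generate(**inputs, num_beams=n, num_return_sequences=n, max_new_tokens=24)
    return [comet_tok.decode(o, skip_special_tokens=True).strip() for o in outs]

def contextual_filter(facts, other_observation: str) -> str:
    """Eq. (3)/(4): keep the fact with the highest similarity to the other observation."""
    scores = sim_model.predict([(other_observation, f) for f in facts])
    return facts[int(scores.argmax())]

# Usage: one filtered "after" fact for O1 and one "before" fact for O2.
# Relation names here follow the paper's lists; the public checkpoint spells some
# of them differently (e.g., "xIntent" rather than "xIntents").
# c_A = contextual_filter(comet_generate(O1, "oEffect"), O2)
# c_B = contextual_filter(comet_generate(O2, "xIntents"), O1)
```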
Given the observations $O_1 = \{t^{O_1}_1, \ldots, t^{O_1}_m\}$, $O_2 = \{t^{O_2}_1, \ldots, t^{O_2}_n\}$, and the hypothesis $H = \{t^{H}_1, \ldots, t^{H}_l\}$ as sequences of tokens, we can augment the input with the commonsense knowledge from the previous step $K = \{c^A, c^B\}$ as a sequence of tokens $K = \{t^{K}_1, \ldots, t^{K}_q\}$. The Abductive Commonsense Language Generation can be formulated by minimizing the following negative log-likelihood:
$${\mathcal{L}}=-\sum_{i=1}^{N}\log P(t_{i}^{H}\mid t_{<i}^{H},O_{1},O_{2},K)\quad(5)$$
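A minimal sketch of this objective with a GPT-2 model is given below. The separator format used to concatenate the observations and the knowledge is an assumption, not the paper's exact input template; the loss is restricted to the hypothesis tokens by masking the context positions.

```python
# Sketch of Eq. (5): negative log-likelihood of the hypothesis given O1, O2, and K.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def hypothesis_nll(o1: str, o2: str, c_a: str, c_b: str, hypothesis: str) -> torch.Tensor:
    context = f"{o1} {c_a} {c_b} {o2} hypothesis:"            # assumed separator format
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    hyp_ids = tokenizer(" " + hypothesis, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, hyp_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.size(1)] = -100                       # ignore context positions in the loss
    # Mean over hypothesis tokens of -log P(t_i^H | t_<i^H, O1, O2, K)
    return model(input_ids, labels=labels).loss
```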
Training: We trained three different models for αNLG - COGENLG, COGENMD and COGENSM
by fine-tuning three GPT-2 models of sizes large, medium, and small, respectively. We used an embedding size of 512 for all models with a maximum token size of 128. The learning rate was set to 5e−4 with a weight decay of 0.01. We stopped training after 5 epochs, before overfitting to the training set occurred.
We also propose the fine-tuned COGENRB
model for αNLI, which is based on the large ROBERTA (Liu et al., 2019) model.
| Model | BERT-Score | BLEURT | BLEU | TER | METEOR | ROUGE | Human |
|----------------|--------------|----------|--------|--------|----------|---------|---------|
| COMeT-Emb+GPT2 | 88.25 | -1.07 | 3.22 | 106.31 | 9.74 | 17.42 | 44.56 |
| COGENLG | 88.74 | -1.12 | 28.80 | 123.47 | 21.62 | 26.75 | 52.00 |
| COGENMD | 89.75 | -0.83 | 37.15 | 104.19 | 22.56 | 30.58 | 69.2 |
| COGENSM | 88.14 | -0.99 | 10.25 | 103.40 | 11.50 | 20.62 | 43.2 |
Table 1: The automatic evaluations of generative models on the *test* set of ART Dataset (Bhagavatula et al., 2019)
We use the first 20% of steps for warm-up with a learning rate of 1e−5 and after that decrease it linearly by a ratio of 0.01.
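As an illustration of this αNLI fine-tuning setup, the sketch below pairs RoBERTa-large with a linear schedule that warms up over the first 20% of steps; the input format for the two hypotheses, the total step count, and the simplified decay endpoint are assumptions.

```python
# Sketch of the alphaNLI (multiple-choice) fine-tuning setup with RoBERTa-large.
import torch
from transformers import (RobertaForMultipleChoice, RobertaTokenizer,
                          get_linear_schedule_with_warmup)

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForMultipleChoice.from_pretrained("roberta-large")

total_steps = 10_000                                          # placeholder
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.2 * total_steps),                  # warm-up on the first 20% of steps
    num_training_steps=total_steps,                           # then linear decay (simplified)
)

def encode_example(o1: str, o2: str, h1: str, h2: str):
    """Each choice packs one hypothesis between the observations (assumed format)."""
    enc = tokenizer([[f"{o1} {h1}", o2], [f"{o1} {h2}", o2]],
                    return_tensors="pt", padding=True, truncation=True)
    return {k: v.unsqueeze(0) for k, v in enc.items()}        # (batch=1, n_choices=2, seq_len)
```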
Inference: For inference, we use beam search decoding with a beam size of 5. We chose this search as it works best with controllable language generation (Zandie and Mahoor, 2021). For each pair of observations, multiple hypotheses are generated and then filtered out based on entailment. We use the pre-trained semantic entailment BERT cross-encoder (Reimers and Gurevych, 2019), trained on SNLI (Bowman et al., 2015)
and MultiNLI (Williams et al., 2018), to filter out each generated hypothesis H if O1 → H or H → O2 is a contradiction. Using this technique, we can remove undesired hypotheses that are incompatible with the given observations.
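A minimal sketch of this entailment-based filtering with a publicly available NLI cross-encoder is shown below; the checkpoint name and its label ordering are assumptions that should be checked against the model card, and the beam-search step producing the candidate hypotheses is omitted.

```python
# Sketch of entailment filtering: drop H if O1 -> H or H -> O2 is a contradiction.
from sentence_transformers import CrossEncoder

nli = CrossEncoder("cross-encoder/nli-roberta-base")
LABELS = ["contradiction", "entailment", "neutral"]           # assumed label order

def is_contradiction(premise: str, hypothesis: str) -> bool:
    scores = nli.predict([(premise, hypothesis)])
    return LABELS[int(scores[0].argmax())] == "contradiction"

def filter_hypotheses(o1: str, o2: str, hypotheses):
    """Keep only hypotheses compatible with both observations."""
    return [h for h in hypotheses
            if not is_contradiction(o1, h) and not is_contradiction(h, o2)]
```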
## 4 Results
We report BERT-Score (Zhang et al., 2020),
BLEURT (Sellam et al., 2020), BLEU (Papineni et al., 2002), TER (Snover et al., 2006), METEOR
(Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) for automatic evaluation of our model. The results in Table 1 show that both COGENMD and COGENLG outperform the best model in (Bhagavatula et al., 2019), which is the COMeT-Emb+GPT2 model, on all metrics on the *test* set of the ART
dataset (Bhagavatula et al., 2019). Additionally, COGENMD performs the best among all the models.
We conducted human evaluations on 100 randomly selected results from the *test* set. The evaluation was completed by five graduate students unrelated to our research, providing us with unbiased data.
These evaluations, shown in Table 2, are consistent with previous automatic results. These results show that COGENMD generates better results compared to the base model (COMeT-Emb+GPT2) and the Real Hypothesis in most cases. Also, COGENLG
outperforms the base model.
Finally, we show the results of the αNLI task for different models in Table 3.
| Model | < | Neutral | > | Comparator |
|---------|-------|-----------|-------|--------------|
| COGENLG | 48.00 | 22.20 | 29.80 | RH |
| COGENLG | 37.00 | 17.00 | 46.00 | CM |
| COGENMD | 30.60 | 32.80 | 36.40 | RH |
| COGENMD | 23.80 | 23.40 | 52.40 | CM |
| COGENSM | 56.60 | 24.20 | 19.00 | RH |
| COGENSM | 42.00 | 32.20 | 25.80 | CM |

Table 2: Human evaluation of COGEN models against the Real Hypothesis (RH) and the COMeT-Emb+GPT2 baseline (CM) on 100 randomly selected test instances.
| Model | Dev Acc (%) | Test Acc (%) |
|----------------|---------------|----------------|
| ESIM+ELMo | 58.20 | 58.80 |
| BERTLarge | 69.10 | 68.90 |
| COMeT-Emb+GPT2 | 69.40 | 69.10 |
| LMI + MTL | 72.90 | 72.20 |
| COGENRB | 82.90 | 83.26 |
Table 3: Results on the αNLI task. The last row (in bold) shows the performance of COGENRB, which is based on ROBERTA.
Table 3 shows that COGENRB surpasses the previous best model (LMI + MTL) (Paul and Frank, 2021) by a substantial margin. The results on αNLI show the importance of temporal reasoning and contextual filtering along with ROBERTA.
## 5 Conclusion
We present COGEN, a novel approach, released in three model sizes, for generating abductive reasoning given incomplete observations. It integrates temporal reasoning, contextual filtering, and semantic entailment to complement the base GPT-2 model for better reasoning. Both human and automatic evaluations assessed in this study show that COGEN outperforms previous methods used for abductive reasoning. Our approach sets a new state of the art for the αNLI and αNLG tasks on the ART dataset.
## Limitations
The COGEN model introduced in this paper uses temporal relations as a process of abductive reasoning. Although temporal relations have been shown to be very useful in abductive reasoning (Verdoolaege et al., 2000), the effectiveness of other types of relations about an observation has not been evaluated in this paper. In addition, because of the unavailability of a large number of human evaluators, we randomly selected 100 results rather than evaluating the entire test set, which would have been ideal.
## References
Henning Andersen. 1973. Abductive and deductive change. *Language*, pages 765–793.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. *arXiv preprint arXiv:1908.05739*.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi.
2019. COMET: Commonsense transformers for automatic knowledge graph construction. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. *arXiv* preprint arXiv:1508.05326.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in Neural Information Processing* Systems 33 (NeurIPS 2020).
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020.
The pile: An 800gb dataset of diverse text for language modeling. *arXiv preprint arXiv:2101.00027*.
Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2020. Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs. arXiv preprint arXiv:2010.05953.
Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020. Language generation with multi-hop reasoning on commonsense knowledge graph. *arXiv preprint arXiv:2009.11692*.
Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2018. Multiple-attribute text rewriting. In International Conference on Learning Representations.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
Debjit Paul and Anette Frank. 2021. Generating hypothetical events for abductive inference. arXiv preprint arXiv:2106.03973.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? *arXiv preprint arXiv:1909.01066*.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. arXiv preprint arXiv:1909.04076.
Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future:
Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. arXiv preprint arXiv:2010.05906.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
arXiv preprint arXiv:1908.10084.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *2011 AAAI Spring Symposium Series*.
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh.
2020. Bleurt: Learning robust metrics for text generation. In ACL.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from nonparallel text by cross-alignment. *arXiv preprint* arXiv:1705.09655.
Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.
Sven Verdoolaege, Marc Denecker, and Frank Van Eynde. 2000. Abductive reasoning with temporal information. *arXiv preprint cs/0011035*.
Douglas Walton. 2014. *Abductive reasoning*. University of Alabama Press.
Chenguang Wang, Xiao Liu, and Dawn Song. 2020.
Language models are open knowledge graphs. *arXiv* preprint arXiv:2010.11967.
Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, and Yejin Choi. 2020. Reflective decoding: Beyond unidirectional generation with off-the-shelf language models. arXiv preprint arXiv:2010.08566.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers), pages 1112–1122. Association for Computational Linguistics.
Rohola Zandie and Mohammad H Mahoor. 2021. Topical language generation using transformers. arXiv preprint arXiv:2103.06434.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
## ACL 2023 Responsible NLP Checklist

## A **For Every Submission:**
✓ A1. Did you describe the limitations of your work?
Section 6
✗ A2. Did you discuss any potential risks of your work?
Not relevant to this research
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
Section 3
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Computational infrastructure used was pretty general, and nothing out of ordinary there to report.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not relevant for this research
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4

## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
The instructions given to annotators were straight-forward
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We recruited student volunteers
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
The annotators were verbally explained that the data they were using were open-source data
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not relevant to this research
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not relevant to this research |
hu-etal-2023-multimodal | Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis | https://aclanthology.org/2023.acl-short.27 | Multimodal relation extraction (MRE) is the task of identifying the semantic relationships between two entities based on the context of the sentence image pair. Existing retrieval-augmented approaches mainly focused on modeling the retrieved textual knowledge, but this may not be able to accurately identify complex relations. To improve the prediction, this research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image. We further develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities. Extensive experiments and analyses show that the proposed method is able to effectively select and compare evidence across modalities and significantly outperforms state-of-the-art models. | # Multimodal Relation Extraction With Cross-Modal Retrieval And Synthesis
Xuming Hu1, Zhijiang Guo2†, Zhiyang Teng3, Irwin King4, Philip S. Yu1,5 1Tsinghua University, 2University of Cambridge, 3Nanyang Technological University, 4The Chinese University of Hong Kong, 5University of Illinois at Chicago [email protected] [email protected] [email protected] [email protected] [email protected]
## Abstract
Multimodal relation extraction (MRE) is the task of identifying the semantic relationships between two entities based on the context of the sentence image pair. Existing retrievalaugmented approaches mainly focused on modeling the retrieved textual knowledge, but this may not be able to accurately identify complex relations. To improve the prediction, this research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image. We further develop a novel approach to synthesize the object-level, imagelevel, and sentence-level information for better reasoning between the same and different modalities. Extensive experiments and analyses show that the proposed method is able to effectively select and compare evidence across modalities and significantly outperforms stateof-the-art models. Code and data are available1.
## 1 Introduction
Relation extraction aims to detect relations among entities in the text and plays an important role in various applications (Zhang et al., 2017; Soares et al., 2019). Early efforts mainly focus on predicting the relations based on the information from one single modality i.e., text. Recently, multimodal relation extraction (MRE) has been proposed to enhance textual representations with the aid of visual clues from images (Zheng et al., 2021a; Chen et al., 2022; Wang et al., 2022). It extends the textbased approaches by providing visual contexts to address the common ambiguity issues in identifying relations. Figure 1 shows an example from the MNRE dataset (Zheng et al., 2021b). To infer the relation between entities *Ang Lee* and *Oscar*, the model needs to capture the interactions from visual relations between objects in an image to textual relations in a sentence. The visual relation "holding" 1https://github.com/THU-BPM/MRE
†Corresponding Author.
![0_image_0.png](0_image_0.png)
between two objects helps to detect the relation awarded between two textual entities.
Most existing efforts focus on modeling the visual and textual content of the input. Zheng et al.
(2021a) constructed textual and visual graphs, then identified relations based on graph alignments. Chen et al. (2022) presented a hierarchical visual prefix fusion network to incorporate hierarchical multi-scaled visual and textual features. Li et al. (2023a) proposed a fine-grained multimodal alignment approach with Transformer, which aligns visual and textual objects in representation space. Wang et al. (2022) first proposed retrieval-augmented multimodal relation extraction, where the given image and sentence are used to retrieve textual evidence from a knowledge base constructed from Wikipedia. Unlike previous retrieval-based models, we retrieve not only texts but also visual and textual evidence related to the object, sentence, and entire image. A novel strategy is used to combine evidence from the object, sentence, and image levels in order to reason better across modalities.
![1_image_0.png](1_image_0.png)
Our key contributions are summarized as follows:
- We use cross-modal retrieval for obtaining multimodal evidence. To improve prediction accuracy, we further synthesize visual and textual information for relational reasoning.
- We evaluate our method on the MRE benchmark. Extensive experimental results validate the effectiveness of the proposed approach.
## 2 Methodology

## 2.1 Cross-Modal Retrieval
This module aims to retrieve visual evidence based on the input text (sentence, entities), and textual evidence based on the input image and objects.
Textual evidence We first obtain the local visual objects with top-$m$ salience using the visual grounding toolkit (Yang et al., 2019): $V_{obj} = \{V^{1}_{obj}, V^{2}_{obj}, \cdots, V^{m}_{obj}\}$. Then we query the Google Vision APIs2 with $V_{img}$ and $V_{obj}$ to obtain textual evidence: the APIs return a list of entities $E_{entity}$ that describe the content of $V_{img}$ and $V_{obj}$ and provide a more effective explanation of the visual content. In addition to *Entity*, the APIs also return the images' URLs and the URLs of the pages containing them. We propose a web crawler that searches for the images' URLs in the containing pages and returns the captions $E_{caption}$ if found. Note that $E_{entity}$ and $E_{caption}$ contain 10 entities and captions obtained for each $V_{img}$ and $V_{obj}$ as retrieved textual evidence.

2https://cloud.google.com/vision/docs/detecting-web
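As a rough illustration of this retrieval step, the sketch below queries web detection for a single image and collects entity descriptions and candidate page URLs. It assumes the `google-cloud-vision` Python client with configured credentials; the helper name is ours, and the caption crawler over the returned pages is only stubbed out.

```python
# A minimal sketch of the textual-evidence retrieval described above, assuming the
# `google-cloud-vision` client and configured credentials. The helper name is ours;
# the caption crawler over the returned page URLs is only stubbed out.
from google.cloud import vision


def retrieve_textual_evidence(image_path: str, top_k: int = 10):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns entities describing the image plus pages containing it.
    annotation = client.web_detection(image=image).web_detection

    entities = [e.description for e in annotation.web_entities if e.description][:top_k]
    page_urls = [p.url for p in annotation.pages_with_matching_images][:top_k]
    # Captions (E_caption) would be recovered by crawling `page_urls` and locating
    # the matching <img> tags; that crawler is omitted here.
    return entities, page_urls
```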
Visual Evidence We use the textual content $T$ of the post to retrieve the visual evidence. More specifically, we leverage the Google Custom Search API3 to retrieve 10 images $E_{image}$ for the textual content in each post.

3https://developers.google.com/custom-search/v1
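A rough sketch of this image-retrieval step using the Custom Search JSON API is shown below; `API_KEY` and `SEARCH_ENGINE_ID` are placeholders rather than values from the paper, and only the image URLs are returned (downloading and caching are left to the caller).

```python
# A rough sketch of retrieving visual evidence with the Google Custom Search JSON API.
# The credentials below are placeholders; endpoint and parameters follow the public API.
import requests

API_KEY = "YOUR_API_KEY"          # hypothetical credential
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # hypothetical custom search engine id


def retrieve_visual_evidence(text: str, num_images: int = 10):
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": text,
            "searchType": "image",
            "num": num_images,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Return the image URLs only; downloading is left to the caller.
    return [item["link"] for item in response.json().get("items", [])]
```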
## 2.2 Cross-Modal Synthesis
Given the retrieved visual and textual evidence, this module aims to synthesize multimodal information for relation extraction.
## 2.2.1 Visual Encoder
The visual encoder module encodes the visual content $V_{img}$, $V_{obj}$ and the retrieved visual evidence $E_{image}$ of the post. First, we adopt ResNet (He et al., 2016), pretrained on the ImageNet dataset (Deng et al., 2009), to obtain the visual embedding $\mathbf{h}_v \in \mathbb{R}^{n \times d}$, where $n$ and $d$ represent the number of images and the hidden dimension. To fuse the cross-modal visual and textual information, we employ a learnable linear layer $\mathbf{h}_v = \mathbf{W}_\phi \mathbf{h}_v + \mathbf{b}_\phi$.
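A minimal PyTorch sketch of this encoder follows, assuming a recent torchvision: an ImageNet-pretrained ResNet-50 with its classification head removed, followed by the learnable projection $\mathbf{W}_\phi, \mathbf{b}_\phi$. Keeping the output dimension at 2048 is our assumption.

```python
# A minimal PyTorch sketch of the visual encoder: an ImageNet-pretrained ResNet-50
# with its classification head removed, followed by the learnable projection
# (W_phi, b_phi). Assumes a recent torchvision; the 2048-d output is an assumption.
import torch
import torch.nn as nn
from torchvision import models


class VisualEncoder(nn.Module):
    def __init__(self, out_dim: int = 2048):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to (and including) global average pooling: 2048-d features.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Linear(2048, out_dim)  # h_v = W_phi h_v + b_phi

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (n, 3, 224, 224) -> features: (n, out_dim)
        feats = self.backbone(images).flatten(1)
        return self.proj(feats)
```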
## 2.2.2 Textual Encoder
The textual encoder module encodes the textual content $T$ and the retrieved textual evidence $E_{entity}$, $E_{caption}$ of the post. For each sentence $X = [x_1, x_2, ..., x_M]$ in the textual content $T$ where two entities $[E_1]$ and $[E_2]$ are mentioned, we follow the labeling schema adopted in Soares et al. (2019) and augment $X$ with four reserved tokens $[E_1]$, $[/E_1]$, $[E_2]$, $[/E_2]$ to mark the beginning and the end of each entity mentioned in the sentence:

$$X = \left[x_1, ..., [E_1], x_i, ..., x_{j-1}, [/E_1], ..., [E_2], x_k, ..., x_{l-1}, [/E_2], ..., x_M\right],\tag{1}$$
as the input token sequence. We adopt BERT (Devlin et al., 2019) as the encoder and obtain the textual embedding $\mathbf{h}_t \in \mathbb{R}^{(M+4) \times d}$, where $M$ and $d$ represent the number of tokens in $X$ and the hidden dimension. Thanks to the informative visual embeddings, we can better capture the correlation between visual content and textual information.
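The sketch below illustrates the entity-marker augmentation of Eq. (1) followed by BERT encoding, assuming Hugging Face `transformers`; registering the markers as additional special tokens (and resizing the embedding matrix) is our implementation choice, and the example sentence and spans are made up.

```python
# A sketch of the entity-marker augmentation in Eq. (1) followed by BERT encoding,
# using Hugging Face `transformers`. Registering the markers as additional special
# tokens is our implementation choice; the example sentence and spans are made up.
from transformers import BertModel, BertTokenizerFast

MARKERS = ["[E1]", "[/E1]", "[E2]", "[/E2]"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": MARKERS})
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.resize_token_embeddings(len(tokenizer))


def mark_entities(tokens, e1_span, e2_span):
    """Wrap the (start, end) word spans of the two entities with reserved tokens."""
    (i, j), (k, l) = e1_span, e2_span
    out = list(tokens)
    # Insert from the rightmost position first so earlier indices stay valid.
    for pos, marker in sorted(
        [(l, "[/E2]"), (k, "[E2]"), (j, "[/E1]"), (i, "[E1]")], reverse=True
    ):
        out.insert(pos, marker)
    return out


marked = mark_entities(
    "Ang Lee holds his Oscar at the ceremony".split(), e1_span=(0, 2), e2_span=(4, 5)
)
inputs = tokenizer(" ".join(marked), return_tensors="pt", truncation=True, max_length=128)
h_t = encoder(**inputs).last_hidden_state  # (1, sequence length, 768)
```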
## 2.2.3 Cross-Modal Selection
Given the encoded multimodal evidence and inputs $\mathbf{h}^l_t \in \mathbb{R}^{(M+4) \times d}$ and $\mathbf{h}^l_v \in \mathbb{R}^{n \times d}$, this module selects visual/textual evidence and compares it against the input image/sentence. Inspired by Vaswani et al. (2017), we leverage multi-head attention to perform the cross-modal selection. We first project the representations as query, key, and value vectors:
$$\mathbf{Q}^{l},\mathbf{K}^{l},\mathbf{V}^{l}=\mathbf{x}\mathbf{W}^{l}_{q},\mathbf{x}\mathbf{W}^{l}_{k},\mathbf{x}\mathbf{W}^{l}_{v};\mathbf{x}\in\left\{\mathbf{h}^{l}_{t},\mathbf{h}^{l}_{v}\right\},\tag{2}$$
where $\mathbf{W}^l_q, \mathbf{W}^l_k, \mathbf{W}^l_v \in \mathbb{R}^{d \times d_h}$ represent the attention projection parameters. We then obtain the hidden features at the $(l+1)$-th layer via multi-head attention:
$$\begin{array}{l}{{\mathbf{h}_{t}^{l+1}=\mathrm{Attn}\left(\mathbf{Q}_{t}^{l},\left[\mathbf{K}_{v}^{l},\mathbf{K}_{t}^{l}\right],\left[\mathbf{V}_{v}^{l},\mathbf{V}_{t}^{l}\right]\right),}}\\ {{\mathbf{h}_{v}^{l+1}=\mathrm{Attn}\left(\mathbf{Q}_{v}^{l},\left[\mathbf{K}_{t}^{l},\mathbf{K}_{v}^{l}\right],\left[\mathbf{V}_{t}^{l},\mathbf{V}_{v}^{l}\right]\right).}}\end{array}\tag{3}$$
Note that the textual features $\mathbf{h}_t$ come from two sources. The first is the textual content in the post with two entities, for which we take the relational features at the $[E_1]$ and $[E_2]$ positions. The other is the retrieved textual evidence; since it does not contain marked entities, we take the representation at the $[CLS]$ position:
$$\begin{array}{l}\mathbf{h}_{t,content}=\mbox{Avg.}(\mathbf{h}_{t,[E_{1}]},\mathbf{h}_{t,[E_{2}]}),\\ \mathbf{h}_{t,retrieved}=\mathbf{h}_{t,[CLS]}.\end{array}\tag{4}$$
where $\mathbf{h}_t = \{\mathbf{h}_{t,content}, \mathbf{h}_{t,retrieved}\} \in \mathbb{R}^{d}$ is the representation of the textual content and the retrieved textual evidence for each post, with embedding size $d = 768$. Similarly, we use a learnable linear layer $\mathbf{h}_t = \mathbf{W}_\theta \mathbf{h}_t + \mathbf{b}_\theta$ to change the dimension $d$ from 768 to 2048, and employ the multi-head attention in Eq. 2, 3, and 4 to update the visual content and the retrieved visual evidence.
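A simplified sketch of the selection layer in Eqs. (2)-(3) follows: queries come from one modality while keys and values concatenate both modalities (the concatenation order does not change the attention output). Using `nn.MultiheadAttention` with 8 heads is an implementation assumption, not a detail from the paper.

```python
# A simplified sketch of the cross-modal selection layer (Eqs. 2-3): queries come from
# one modality, while keys and values concatenate both modalities. The head count and
# use of nn.MultiheadAttention are implementation assumptions.
import torch
import torch.nn as nn


class CrossModalSelection(nn.Module):
    def __init__(self, dim: int = 2048, num_heads: int = 8):
        super().__init__()
        self.attn_t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, h_t: torch.Tensor, h_v: torch.Tensor):
        # h_t: (B, M+4, dim) textual states; h_v: (B, n, dim) visual states.
        joint = torch.cat([h_v, h_t], dim=1)
        h_t_next, _ = self.attn_t(query=h_t, key=joint, value=joint)
        h_v_next, _ = self.attn_v(query=h_v, key=joint, value=joint)
        return h_t_next, h_v_next


# Example shapes: batch of 2, 24 textual tokens (with markers), 10 visual features.
h_t, h_v = torch.randn(2, 24, 2048), torch.randn(2, 10, 2048)
new_t, new_v = CrossModalSelection()(h_t, h_v)
```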
## 2.2.4 Cross-Modal Consistency
This module aims to evaluate the consistency between the retrieved textual and visual evidence and the original post. A natural idea is to leverage the textual and visual content in the original post to update the retrieved textual and visual evidence.
We could obtain the updated evidence $\mathbf{h}_{t,retrieved}$ and $\mathbf{h}_{v,retrieved}$ with $\mathbf{h}_{t,content}$ and $\mathbf{h}_{v,content}$ as:
$$\begin{array}{c}{{\mathbf{h}_{t,r.}=\mathrm{softmax}(\frac{\mathbf{h}_{t,c.}\mathbf{W}_{t}\times(\mathbf{h}_{t,r.}\mathbf{W}_{t}^{\prime})^{T}}{\sqrt{d_{t}}})\mathbf{h}_{t,r.},}}\\ {{\mathbf{h}_{v,r.}=\mathrm{softmax}(\frac{\mathbf{h}_{t,c.}\mathbf{W}_{v}\times(\mathbf{h}_{v,r.}\mathbf{W}_{v}^{\prime})^{T}}{\sqrt{d_{v}}})\mathbf{h}_{v,r.},}}\end{array}\tag{5}$$
where $\mathbf{W}_t, \mathbf{W}'_t \in \mathbb{R}^{768 \times 768}$ and $\mathbf{W}_v, \mathbf{W}'_v \in \mathbb{R}^{2048 \times 2048}$ are trainable projection matrices, and $d_t$, $d_v$ are hyperparameters.
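The sketch below mirrors the consistency update of Eq. (5) as a single scaled attention step in which the original content re-weights the retrieved evidence; treating the retrieved side as a set of $K$ evidence vectors and all module/variable names are our assumptions.

```python
# A sketch of the consistency update in Eq. (5): the original content re-weights the
# retrieved evidence via projected scaled dot-product attention. Treating the retrieved
# side as K evidence vectors and the names used here are our assumptions.
import math
import torch
import torch.nn as nn


class ConsistencyUpdate(nn.Module):
    def __init__(self, dim: int, temperature: float):
        super().__init__()
        self.w_c = nn.Linear(dim, dim, bias=False)  # projects the content query (W)
        self.w_r = nn.Linear(dim, dim, bias=False)  # projects the retrieved evidence (W')
        self.temperature = temperature              # d_t or d_v in Eq. (5)

    def forward(self, h_content: torch.Tensor, h_retrieved: torch.Tensor) -> torch.Tensor:
        # h_content: (B, 1, dim); h_retrieved: (B, K, dim) for K pieces of evidence.
        scores = self.w_c(h_content) @ self.w_r(h_retrieved).transpose(1, 2)  # (B, 1, K)
        weights = torch.softmax(scores / math.sqrt(self.temperature), dim=-1)
        return weights @ h_retrieved  # (B, 1, dim) updated retrieved representation
```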
## 2.3 Classifier
We concatenate the resulting representations to form the final multimodal representations and leverage a feed-forward neural network to predict the relation:
$$\mathbf{h}_{final}=\mathrm{FFNN}([\mathbf{h}_{t,c.};\mathbf{h}_{t,r.};\mathbf{h}_{v,c.};\mathbf{h}_{v,r.}]),\tag{6}$$

where $\mathbf{h}_{final}$ is then fed into a linear layer followed by a softmax operation to obtain a probability distribution $\mathbf{p} \in \mathbb{R}^{m}$ over $m$ relation labels.
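A sketch of this classifier is shown below, assuming the dimensions reported in Section 3.1 (768-d textual and 2048-d visual representations, a 1024-d hidden layer); it returns logits to which the softmax and cross-entropy loss are applied outside the module.

```python
# A sketch of the final classifier (Eq. 6), assuming 768-d textual and 2048-d visual
# representations and the 1024-d hidden layer from Section 3.1. Returns logits;
# softmax / cross-entropy is applied outside the module.
import torch
import torch.nn as nn


class RelationClassifier(nn.Module):
    def __init__(self, text_dim: int = 768, vis_dim: int = 2048, num_relations: int = 23):
        super().__init__()
        in_dim = 2 * text_dim + 2 * vis_dim  # [h_{t,c}; h_{t,r}; h_{v,c}; h_{v,r}]
        self.ffnn = nn.Sequential(
            nn.Linear(in_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_relations),
        )

    def forward(self, h_tc, h_tr, h_vc, h_vr):
        h_final = torch.cat([h_tc, h_tr, h_vc, h_vr], dim=-1)
        return self.ffnn(h_final)  # (B, num_relations) relation logits
```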
## 3 Experiments And Analyses

## 3.1 Experimental Setup
We evaluate the model on MNRE (Zheng et al.,
2021b), which contains 12,247/1,624/1,614 samples in train/dev/test sets, 9,201 images, and 23 relation types. Following prior efforts, we adopt Accuracy, Precision, Recall, and F1 as the evaluation metrics. For fair comparisons, all baselines and our method use ResNet50 (He et al., 2016) as the visual backbone and BERT-base (Devlin et al.,
2019) as the textual encoder. We computed the Accuracy and Macro F1 as the evaluation metric.
The hyper-parameters are chosen based on the development set. Results are reported as the mean and standard deviation over 5 runs. For the textual encoder of the retrieval-based model, we use the default BERT-Base tokenizer with a max length of 128 to preprocess the data. For the visual encoder of the retrieval-based model, we use ResNet-50 to encode the images: we scale each image proportionally so that the short side is 256 and crop the center to 224 × 224. For the feed-forward neural network of the classifier, we set the layer dimensions as $h_R$–1024–(number of relation labels), where $h_R = 768 \times 2 + 2048 \times 2$. We use BertAdam with a 3e-5 learning rate and a warmup proportion of 0.06 to optimize the cross-entropy loss, and set the batch size to 16.
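These settings can be sketched as follows; note that the original work uses BertAdam, for which a modern AdamW optimizer with a linear warmup schedule (6% of total steps) stands in here, and `total_steps` is a placeholder supplied by the caller.

```python
# A sketch of the preprocessing and optimization settings reported above: scale the
# short side to 256, center-crop to 224 x 224, learning rate 3e-5 with 6% warmup.
# AdamW + a linear warmup schedule stands in for the original BertAdam.
import torch
from torchvision import transforms
from transformers import get_linear_schedule_with_warmup

image_transform = transforms.Compose([
    transforms.Resize(256),        # short side -> 256
    transforms.CenterCrop(224),    # center crop -> 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def build_optimizer(model: torch.nn.Module, total_steps: int):
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.06 * total_steps),  # 6% warmup
        num_training_steps=total_steps,
    )
    return optimizer, scheduler
```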
## 3.2 Baselines
We adopt two types of baselines:
Text-based Baselines only encode text content:
(1) PCNN (Zeng et al., 2015), (2) BERT (Devlin et al., 2019), and (3) MTB (Soares et al., 2019).
| Type | Methods | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|
| Text-based | PCNN | 73.15 | 62.85 | 49.69 | 55.49 |
| Text-based | BERT | 74.42 | 58.58 | 60.25 | 59.40 |
| Text-based | MTB | 75.69 | 64.46 | 57.81 | 60.86 |
| Multi-modal | UMT | 77.84 | 62.93 | 63.88 | 63.46 |
| Multi-modal | UMGF | 79.27 | 64.38 | 66.23 | 65.29 |
| Multi-modal | BSG | 77.15 | 62.95 | 62.65 | 62.80 |
| Multi-modal | MEGA | 80.05 | 64.51 | 68.44 | 66.41 |
| Multi-modal | VBERT | 73.97 | 57.15 | 59.48 | 58.30 |
| Multi-modal | MoRe | 79.87 | 65.25 | 67.32 | 66.27 |
| Multi-modal | Iformer | 92.38 | 82.59 | 80.78 | 81.67 |
| Multi-modal | HVPnet | 92.52 | 82.64 | 80.78 | 81.85 |
| Multi-modal | Ours | 93.54±0.16 | 85.03±0.14 | 84.25±0.17 | 84.64±0.16 |
| Multi-modal | w/o Object Evi. | 92.37±0.16 | 83.02±0.14 | 82.36±0.18 | 82.69±0.15 |
| Multi-modal | w/o Image Evi. | 92.83±0.15 | 83.44±0.18 | 83.15±0.15 | 83.29±0.17 |
| Multi-modal | w/o Visual Evi. | 92.72±0.17 | 82.78±0.19 | 83.63±0.24 | 83.20±0.21 |
| Multi-modal | w/o Selection | 92.75±0.16 | 82.81±0.14 | 83.44±0.16 | 83.12±0.16 |
| Multi-modal | w/o Consistency | 92.68±0.15 | 83.40±0.13 | 82.71±0.16 | 83.05±0.15 |
Multi-modal Baselines encode both text and image contents: (1) UMT (Yu et al., 2020) adopts the multimodal interaction module to obtain the token representations incorporated with visual information and visual representations. (2) UMGF (Zhang et al., 2021) adopts a unified multi-modal graph fusion method. (3) BSG (Zheng et al., 2021a) adopts the textual representation from BERT and the visual characteristics produced by the scene graph
(SG). (4) MEGA (Zheng et al., 2021b) adopts a dual graph, which could align multi-modal features between entities and objects to improve performance. (5) VBERT (Li et al., 2019) adopts the single-stream structure which is different from the attention-based methods. (6) MoRe (Wang et al.,
2022) obtains more textual information by retrieving images and titles, thereby improving the accuracy of relation classification and named entity recognition. (7) Iformer (Li et al., 2023a) increases the amount of information in the image by detecting the objects. (8) HVPnet (Chen et al., 2022)
treats visual representations as visual prefixes inserted into each attention layer to guide textual representations toward error-insensitive prediction decisions.
## 3.3 Main Results
Table 1 shows the mean and standard deviation over 5 runs of training and testing on MNRE.
We first compare text-based and multi-modal baselines and observe the performance improvement after incorporating visual content, indicating that images can help reveal the potential relationship between two entities. For the multi-modal model, Iformer (Li et al., 2023a) and HVPnet (Chen et al.,
2022) specifically detect the objects in the image and achieve average gains of 17.23% F1 and 14.15% Accuracy compared with other multi-modal baselines.
![3_image_0.png](3_image_0.png)
Therefore, we retrieve textual and visual evidence based on the object, sentence, and whole image, and achieve an average of 2.79% F1 and 1.02% Accuracy gains compared to the best-reported model HVPnet. Thanks to the retrieved visual and textual evidence, the text and image content in the original post is further explained, which helps our model obtain valuable clues to classify the relations between two entities.
## 3.4 Analysis And Discussion
Ablation Study. We conduct an ablation study on the test set to show the effectiveness of the different modules of our model. Ours *w/o Object Evidence* and Ours *w/o Image Evidence* remove the descriptions of objects and images, respectively, from the retrieved textual evidence. Correspondingly, Ours *w/o Visual Evidence* removes the visual evidence retrieved for the text content. The results in Table 1 demonstrate that the three types of evidence bring 1.95%, 1.35%, and 1.44% F1 improvements, respectively. Among them, the textual evidence obtained from object retrieval brings the greatest benefit, which is also related to the potential entity information contained in the objects. Removing the *Cross-Modal Selection* and *Cross-Modal Consistency* modules means that we no longer select appropriate evidence or update the retrieved evidence with the original content, which increases the noise from irrelevant evidence and leads to drops of 1.52% and 1.59% F1, respectively.
Analyze the Impact of Evidence. In Figure 3, we vary the number of retrieved visual and textual evidence items from 1 ∼ 20 and report the F1 on the test set. The fluctuating results indicate that both the quantity and quality of the retrieved evidence affect performance. Using too little textual or visual evidence cannot bring enough explanation to the original post, which lowers the quality of the model's classification. Using too much evidence introduces false or irrelevant evidence noise, which also hurts performance. However, no matter how much evidence is adopted, our method consistently outperforms HVPnet, which illustrates the effectiveness of adding evidence. In our model, we adopt 10 pieces of textual and visual evidence for each post to achieve the best performance. We believe the Cross-Modal Consistency module can alleviate the irrelevant noise so that the model can obtain helpful auxiliary evidence.
![4_image_0.png](4_image_0.png)
Analyze Performance Changes in Tail Relations.
We select the tail relations with the fewest examples among the 23 relation classes in MNRE and study how their F1 changes after adding retrieved evidence in Figure 4. Compared with the 2.79% improvement brought by the evidence on all relations, we find that almost all tail relations obtain more than a 22.68% F1 improvement (46.28 vs. 68.96), which shows that the retrieved evidence is especially helpful for the few-shot tail relation types. This is an attractive property in real-world applications, since labeled training data is usually harder to obtain for tail relation classes.
Relation extraction has garnered considerable interest in the research community due to its essential role in various natural language processing applications (Guo et al., 2019; Nan et al., 2020; Hu et al., 2021b,a). The initial efforts in this field focused on detecting relations between entities in the text, with different neural architectures (Zeng et al.,
2015; Zhang et al., 2017; Guo et al., 2020) and pretrained language models (Soares et al., 2019; Devlin et al., 2019) used to encode the textual information. Multimodal relation extraction has recently been proposed, where visual clues from images are used to enhance entity representations (Zheng et al., 2021a,b; Chen et al., 2022; Wang et al., 2022).
Most existing efforts focus on fusing the visual and textual modalities efficiently. Zheng et al. (2021b)
constructed the dual modality graph to align multimodal features among entities and objects. Chen et al. (2022) concatenated object-level visual representation as the prefix of each self-attention layer in BERT. Li et al. (2023a) introduced a fine-grained multimodal fusion approach to align visual and textual objects in representation space. Closest to our work, Wang et al. (2022) proposed to retrieve textual information related to the entities based on the given image and sentence. Unlike prior efforts, we not only retrieve texts related to entities but also retrieve visual and textual evidence related to the object, sentence, and entire image. We further synthesize the retrieved object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
## 5 Conclusion And Future Work
We propose to retrieve multimodal evidence and model the interactions among the object, sentence, and whole image for better relation extraction. Experiments show that the proposed method achieves competitive results on MNRE. For future research directions, we can utilize open-source image search and caption generation tools to retrieve textual and image evidence. For example, to retrieve visual evidence, one can (1) use a web crawler to search Google Images, or (2) utilize a searchable image database: PiGallery4, where images can be sourced from Open Image Dataset5, which contains ∼9 million images. For retrieving textual evidence, one can use CLIP to generate image captions. Moreover, we can also apply the method of multimodal retrieval to low-resource relation extraction (Hu et al., 2020; Liu et al., 2022b; Hu et al., 2023), natural language inference (Li et al., 2023b, 2022), semantic parsing (Liu et al., 2022a, 2023), and other NLP tasks, thus realizing information enhancement based on images and retrieval.
## 6 Limitation
In this paper, we suggest incorporating textual and visual data from search engines for multimodal relation extraction. Despite the fact that the proposed model yields competitive results on the benchmark, it still has several limitations. Firstly, using a search engine is a feasible way to obtain related knowledge, but it also brings the issue of noisy evidence.
Unrelated visual and textual evidence returned by the search engine may lead to incorrect predictions from the model. Additionally, not all the retrieved evidence is equally reliable, and sometimes sources may contradict each other. On the other hand, retrieval-augmented methods are slower than content-based counterparts, since retrieving evidence from the Internet requires extra time. Therefore, it may not satisfy some of the time-sensitive scenarios. Lastly, evidence may be presented in different forms other than texts and images. For instance, structural information such as tables, info lists, and knowledge graphs also provide important contexts for identifying semantic relations. Humans are able to extract relevant information from these heterogeneous sources for inference, while our relation extraction system can only model and reason over textual and visual evidence.
## 7 Acknowledgement
We thank the reviewers for their valuable comments. The work described here was partially supported by grants from the National Key Research and Development Program of China (No.
2018AAA0100204) and from the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14222922, RGC GRF,
No. 2151185), NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.
Zhiyang Teng was partially supported by CAAIHuawei MindSpore Open Fund (CAAIXSJLJJ2021-046A).
## References
Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Good visual guidance make a better extractor: Hierarchical visual prefix for multimodal entity and relation extraction. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1607–1618, Seattle, United States. Association for Computational Linguistics.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. IEEE Computer Society.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zhijiang Guo, Guoshun Nan, Wei Lu, and Shay B. Cohen. 2020. Learning latent forests for medical relation extraction. In *Proceedings of the Twenty-Ninth* International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3651–3657. ijcai.org.
Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL
2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 241–251. Association for Computational Linguistics.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE
Computer Society.
Xuming Hu, Zhaochen Hong, Chenwei Zhang, Irwin King, and Philip S Yu. 2023. Think rationally about what you see: Continuous rationale extraction for relation extraction. *arXiv preprint arXiv:2305.03503*.
Xuming Hu, Lijie Wen, Yusong Xu, Chenwei Zhang, and Philip S. Yu. 2020. Selfore: Self-supervised relational feature learning for open relation extraction.
In *Proc. of EMNLP*, pages 3673–3682.
Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, and Philip S. Yu. 2021a. Semi-supervised relation extraction via incremental meta self-training.
In *Findings of EMNLP*, pages 487–496.
Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, and Philip S. Yu. 2021b. Gradient imitation reinforcement learning for low resource relation extraction. In *Proc. of EMNLP*, pages 2737–
2746.
Lei Li, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, and Ningyu Zhang. 2023a. On analyzing the role of image for visual-enhanced relation extraction. In *Proc. of AAAI*.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
arXiv preprint arXiv:1908.03557.
Shu'ang Li, Xuming Hu, Li Lin, Aiwei Liu, Lijie Wen, and Philip S. Yu. 2023b. A multi-level supervised contrastive learning framework for low-resource natural language inference. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:1771–
1783.
Shu'ang Li, Xuming Hu, Li Lin, and Lijie Wen.
2022. Pair-level supervised contrastive learning for natural language inference. *arXiv preprint* arXiv:2201.10927.
Aiwei Liu, Xuming Hu, Li Lin, and Lijie Wen. 2022a.
Semantic enhanced text-to-sql parsing via iteratively learning schema linking graph. In *Proc. of KDD*,
pages 1021–1030.
Aiwei Liu, Xuming Hu, Lijie Wen, and Philip S
Yu. 2023. A comprehensive evaluation of chatgpt's zero-shot text-to-sql capability. arXiv preprint arXiv:2303.13547.
Shuliang Liu, Xuming Hu, Chenwei Zhang, Shu'ang Li, Lijie Wen, and Philip S. Yu. 2022b. Hiure: Hierarchical exemplar contrastive learning for unsupervised relation extraction. In *Proc. of NAACL-HLT*, pages 5970–5980.
Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1546–1557. Association for Computational Linguistics.
Livio Baldini Soares, Nicholas Fitzgerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks:
Distributional similarity for relation learning. In *Proc. of ACL*, pages 2895–2905.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural
Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Xinyu Wang, Jiong Cai, Yong Jiang, Pengjun Xie, Kewei Tu, and Wei Lu. 2022. Named entity and relation extraction with multi-modal retrieval. In *Proc. of EMNLP*.
Zhengyuan Yang, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, and Jiebo Luo. 2019. A fast and accurate one-stage approach to visual grounding. In *Proc. of ICCV*, pages 4683–4693.
Jianfei Yu, Jing Jiang, Li Yang, and Rui Xia. 2020.
Improving multimodal named entity recognition via entity span detection with unified multimodal transformer. In *Proc. of ACL*, pages 3342–3352.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao.
2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In *Proc. of EMNLP*, pages 1753–1762.
Dong Zhang, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. 2021. Multimodal graph fusion for named entity recognition with targeted visual guidance. In *Proc. of AAAI*, volume 35, pages 14347–14355.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP
2017, Copenhagen, Denmark, September 9-11, 2017, pages 35–45. Association for Computational Linguistics.
Changmeng Zheng, Junhao Feng, Ze Fu, Yi Cai, Qing Li, and Tao Wang. 2021a. Multimodal relation extraction with efficient graph alignment. In *MM '21:*
ACM Multimedia Conference, Virtual Event, China, October 20 - 24, 2021, pages 5298–5306. ACM.
Changmeng Zheng, Zhiwei Wu, Junhao Feng, Ze Fu, and Yi Cai. 2021b. MNRE: A challenge multimodal dataset for neural relation extraction with visual evidence in social media posts. In 2021 IEEE International Conference on Multimedia and Expo, ICME
2021, Shenzhen, China, July 5-9, 2021, pages 1–6.
IEEE.
## ACL 2023 Responsible NLP Checklist

## A **For Every Submission:**
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 2, Section 3
✓ B1. Did you cite the creators of artifacts you used?
Section 2, Section 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 2, Section 3
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 2, Section 3

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3
## C ✓ **Did You Run Computational Experiments?**
Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2, Section 3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
harrigian-etal-2023-characterization | Characterization of Stigmatizing Language in Medical Records | https://aclanthology.org/2023.acl-short.28 | Widespread disparities in clinical outcomes exist between different demographic groups in the United States. A new line of work in medical sociology has demonstrated physicians often use stigmatizing language in electronic medical records within certain groups, such as black patients, which may exacerbate disparities. In this study, we characterize these instances at scale using a series of domain-informed NLP techniques. We highlight important differences between this task and analogous bias-related tasks studied within the NLP community (e.g., classifying microaggressions). Our study establishes a foundation for NLP researchers to contribute timely insights to a problem domain brought to the forefront by recent legislation regarding clinical documentation transparency. We release data, code, and models. | # Characterization Of Stigmatizing Language In Medical Records
Keith Harrigian†, Ayah Zirikly†, Brant Chee⋆‡, Alya Ahmad⋆,
Anne R. Links⋆, Somnath Saha⋆, Mary Catherine Beach⋆, Mark Dredze†
†Department of Computer Science, ‡Applied Physics Laboratory, ⋆School of Medicine Johns Hopkins University Baltimore, MD
{kharrig5,azirikl1,aahmad24}@jhu.edu, [email protected]
{alinks1,ssaha9,mcbeach}@jhmi.edu, [email protected]
## Abstract
Widespread disparities in clinical outcomes exist between different demographic groups in the United States. A new line of work in medical sociology has demonstrated physicians often use stigmatizing language in electronic medical records within certain groups, such as black patients, which may exacerbate disparities. In this study, we characterize these instances at scale using a series of domain-informed NLP
techniques. We highlight important differences between this task and analogous bias-related tasks studied within the NLP community (e.g.,
classifying microaggressions). Our study establishes a foundation for NLP researchers to contribute timely insights to a problem domain brought to the forefront by recent legislation regarding clinical documentation transparency.
We release data, code, and models.1
## 1 Introduction
Widespread and well-documented disparities in healthcare outcomes between demographic groups exist within the United States (Baciu et al., 2017; Zavala et al., 2021). The sources of these disparities are diverse and complex, with numerous interacting factors contributing to worse outcomes for minority patients (Bell and Lee, 2011; Williams et al.,
2019). One source of disparities may stem from latent biases of healthcare providers (Hall et al.,
2015). Multiple studies have highlighted the tendency for providers to prescribe different treatment plans to black patients compared to white patients despite having similar clinical dispositions (Nelson, 2002; Green et al., 2007; Hoffman et al., 2016). Elevated implicit bias scores have been associated with these decisions and have been further linked with decreased levels of patient-provider communication (Van Ryn et al., 2011; Cooper et al., 2012).
A major challenge with these biases is that they are invoked unconsciously.
1github.com/kharrigian/ehr-stigma

A new line of work in medical sociology has explored this issue through the lens of clinical documentation (Beach et al., 2021), in which bias may be exhibited in how medical providers describe and document patient interactions in the medical record.
In particular, studies have shown physicians often use language that has subtle, stigmatizing connotations (Wolsiefer et al., 2021). This documentation practice may not only negatively frame patients to future providers and thus influence their quality of care, but also discourage patients from seeking treatment altogether (Goddu et al., 2018; Werder et al., 2022). The latter is especially pertinent given the passage of the 21st Century Cures Act that mandates clinical notes are freely accessible by patients in the US (Blease et al., 2021; Harris et al., 2022).
How is stigmatizing language in medical records different from other forms of abusive language?
Prior studies of stigmatizing language in clinical notes have relied on qualitative methods (Park et al.,
2021) or refrained from analyzing computational nuances of the problem domain (Sun et al., 2022).
Modeling tasks such as hate-speech detection (Jahan and Oussalah, 2021; Garg et al., 2022) and analyses of social bias encoded within language models (Liang et al., 2021) share many similarities with characterizing stigmatizing language in medical records. However, it is not clear *a priori* where the task of characterizing stigmatizing language in medical records falls within the broader abusive language landscape.
In this paper, we demonstrate that characterization of stigmatizing language in medical records most strongly parallels the characterization of linguistic microaggressions (Sue et al., 2007). However, unlike traditional microaggressions, biased language in the clinical domain is concentrated in unremarkable phrases and lacks any indication of the targeted identity group. Our analysis establishes a foundation for a novel task that has high importance to both patients and clinicians.
## 2 Stigmatizing Language In Medical Records As Abusive Language
Clinical stigmatizing language lies in the *implicit* and *directed* quadrant of the typology of abusive language introduced by Waseem et al. (2017).
Physicians generally use a vocabulary of commonplace terms and phrases which have negative implications only when interpreted in certain contexts or by other physicians (Valdez, 2021; Beach et al.,
2021). This language almost always places the patient as the target of the stigma, even if they are not the intended recipient (Ho et al., 2014).
Stigmatizing language in medical records shares many similarities with linguistic microaggressions.
Both reflect an unconscious bias internalized by the speaker and materialized through thinly veiled innuendo (Sue et al., 2007; Raney et al., 2021). This innuendo is not necessarily negative in affect (Glick and Fiske, 2001; McMahon and Kahn, 2016).
One major difference between stigmatizing language in the clinical domain and other forms of abusive language is the notion of *necessity*. Whereas most abusive language is better left unsaid, clinicians have a responsibility to document their interaction with patients (Shanley et al., 2009). Often, this requires that they characterize sociallystigmatized circumstances (e.g., substance use disorders) and medically-relevant patient eccentricities (e.g., unfounded social histories). Minor differences in phrasing may have a large impact on whether a statement is stigmatizing to patients.
The idea of stigmatizing language in medical records is relatively new, with Goddu et al. (2018)
providing the first qualitative evidence of negative language in the medical record. Using word counts, Beach et al. (2021) and Himmelstein et al. (2022)
later identified a higher prevalence of implicit bias within records of black patients than white patients.
Sun et al. (2022) was the first to use machine learning to analyze stigmatizing language in medical records. The authors identified sentences with possible bias using a manually-curated word list and then annotated whether each match was positive, negative, or out-of-context. A logistic regression classifier trained on a bag-of-words representation of the text achieved good performance (F1 of 0.935). Unfortunately, the authors did not provide a baseline to indicate how valuable context around the seed terms is for classification.
The more general task of identifying biased and abusive language in text has garnered much attention from researchers in recent years (Schmidt and Wiegand, 2017; Yin and Zubiaga, 2021). Breitfeller et al. (2019) was the first to computationally analyze microaggressions. The majority of microaggression research published thereafter has remained confined to using web data (Lees et al.,
2021; Sabri et al., 2021). Our study provides an analysis of stigma in an important linguistic domain that differs dramatically from those currently studied in the covert bias research space.
## 3 Data
We consider two clinical datasets. In addition to covering different clinical specialties, they also feature different demographic compositions.
JHM We retrospectively acquired a dataset of 128,343 English-language progress notes written by physicians across 5 clinical specialties within the Johns Hopkins Medicine (JHM) hospital system - Internal Medicine, Emergency Medicine, Pediatrics, OB-GYN, and Surgery. Notes were processed in accordance with our institution's privacy policy after approval by our Institutional Review Board (IRB). Because the notes contain sensitive identifiable information, they are unable to be shared beyond our study team.
MIMIC To encourage future research, we also include in our study the publicly-accessible MIMIC-IV-Note dataset (v2.2) (Johnson et al.,
2023). This recently released extension of the widely-adopted MIMIC-III dataset (Johnson et al.,
2016) consists of deidentified free-text clinical notes for patients admitted to an intensive care unit
(ICU) or the emergency department at Beth Israel Deaconess Medical Center in Boston, MA. We focus on the 331,794 available discharge summaries, having found minimal evidence of stigmatizing language in the associated radiology reports.
## 3.1 Annotation
Like Sun et al. (2022), we develop a two-stage process to detect and characterize stigmatizing language in clinical notes. Possible instances of bias are first identified using *anchor* n-grams and then classified using a machine learning classifier. We take the union of n-grams curated by Beach et al.
(2021) and Sun et al. (2022) as our anchor set.
Unlike the single, sentiment-like classification task considered by Sun et al. (2022), we formulate three independent classification tasks that discriminate between instances of bias based on impact.
1. **Credibility & Obstinacy** (Disbelief, Difficult, Exclude): insinuation of doubt regarding a patient's testimony or describes the patient as obstinate.
2. **Compliance** (Negative, Neutral, Positive): patient does not appear to follow medical advice.
3. **Descriptors** (Negative, Neutral, Positive, Exclude): evaluates descriptions of patient behavior and demeanor.
We ran our anchor list against both datasets, caching each match and up to 10 words to the left and right which make up its context. A team of annotators (research assistant and physician coauthors) labeled a random sample of 5,201 and 5,043 instances from the JHM and MIMIC datasets, respectively. All instances in the JHM dataset and the majority of the instances in the MIMIC dataset were labeled independently by at least two annotators.2 We include annotator agreement measures, the label distribution, and full task taxonomy with examples in Appendix A.
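A minimal sketch of this matching step is shown below. The anchor list here is a tiny illustrative subset (not the actual set from Beach et al. (2021) and Sun et al. (2022)), and only single-word anchors are handled for brevity; the released code should be consulted for the full pipeline.

```python
# A minimal sketch of the anchor-matching step: locate anchor terms in a note and
# cache up to 10 words of context on either side. The anchor set below is a tiny
# illustrative subset, and only single-word anchors are handled for brevity.
import re

ANCHORS = {"claims", "insists", "refuses", "adamant", "noncompliant"}  # illustrative only


def find_anchor_matches(note: str, window: int = 10):
    tokens = note.split()
    matches = []
    for i, token in enumerate(tokens):
        normalized = re.sub(r"[^\w-]", "", token.lower())
        if normalized in ANCHORS:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            matches.append({
                "anchor": normalized,
                "context": " ".join(left + [token] + right),
            })
    return matches


print(find_anchor_matches("Patient claims she has been taking her medication as prescribed."))
```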
## 4 Characterizing Stigmatizing Language

## 4.1 What Role Does Context Play In Characterizing Stigmatizing Language?
Some forms of abusive language are stigmatizing in isolation, while others critically depend on context to invoke meaning (Waseem et al., 2017). Prior work has not provided insight regarding where stigmatizing language in medical records lies on this spectrum (Sun et al., 2022). We hypothesize that context around a stigmatizing instance is necessary, but insufficient, for characterizing the utterance.
Methods We test our hypothesis by varying feature representations such that they encode different degrees of the stigmatizing anchor term and its surrounding context. We consider 3 classes of models.
The first two classes allow us to understand the interaction between context and the anchor n-grams in an additive manner. The third class captures more complex dynamics between anchor n-grams and their context. Additional training and evaluation details are included in Appendix B.
1. **Majority**: Majority class and majority class conditioned on anchor n-gram.
2. **Logistic Regression (LR)**: TF-IDF representations. One version with the anchor n-gram and one without.
2A small number of instances from MIMIC were labeled by a single annotator after observing high agreement scores.
3. **BERT**: One version trained on web data (Devlin et al., 2018) and one version trained on clinical notes (Alsentzer et al., 2019).
We also compare four methods of pooling BERT's final hidden layer for input into the task classification head.
1. **Anchor Mean**: Arithmetic mean of tokens
(subwords) composing the anchor n-gram.
2. CLS: Embedding for the classification token.
3. **Sentence Mean**: Arithmetic mean of all tokens in the instance, excluding special tokens.
4. **BERT Pooler**: Weighted pooling of all tokens; weights learned at training time.
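A sketch of the four pooling variants is shown below, applied to BERT's final hidden layer. The masks are assumed to mark the anchor's subword positions and the non-special tokens, respectively, and the learned "BERT Pooler" variant is approximated here with a learnable attention-style weighting; the exact parameterization in the released code may differ.

```python
# A sketch of the four pooling strategies compared above, applied to BERT's final
# hidden layer. `anchor_mask` marks the anchor's subword positions and `token_mask`
# marks non-special tokens; the learned pooler is approximated with attention weights.
import torch
import torch.nn as nn


class PoolingHead(nn.Module):
    def __init__(self, hidden_size: int = 768, method: str = "anchor_mean"):
        super().__init__()
        self.method = method
        self.scorer = nn.Linear(hidden_size, 1)  # used only by the weighted pooler

    @staticmethod
    def _masked_mean(hidden, mask):
        mask = mask.unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1)

    def forward(self, hidden, token_mask, anchor_mask):
        # hidden: (B, T, H); masks: (B, T) with 1s at positions to include.
        if self.method == "anchor_mean":      # mean of the anchor's subwords
            return self._masked_mean(hidden, anchor_mask)
        if self.method == "cls":              # [CLS] token embedding
            return hidden[:, 0]
        if self.method == "sentence_mean":    # mean of non-special tokens
            return self._masked_mean(hidden, token_mask)
        if self.method == "pooler":           # learned weighted pooling over tokens
            scores = self.scorer(hidden).squeeze(-1)
            scores = scores.masked_fill(token_mask == 0, float("-inf"))
            weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
            return (weights * hidden).sum(1)
        raise ValueError(f"Unknown pooling method: {self.method}")
```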
Results The final four rows in Table 1 show clinical BERT's test-set macro F1-score for each pooling method across the three classification tasks; the web version of BERT performs similarly. Although not always statistically significant, the anchored pooling method consistently outperforms the alternative pooling approaches across all tasks and datasets. Under this setting, the classification head lacks direct access to information in each anchor's context window. Classification performance can be thought of as a measure of how well the closed set of anchor n-grams are separated in semantic space.
That the anchor pooling approach outperforms the alternative methods suggests characterizing stigmatizing language in medical records can be thought of as a word-sense-disambiguation task more than a sequence classification task.
The majority and logistic regression model outcomes (first four rows of Table 1) lend additional support to this claim. We see that anchors used as classification criteria in isolation provide a significant improvement over the majority overall model in all cases. The context window used in isolation provides a relatively smaller increase in performance over the majority overall model. Jointly modeling the anchors and their context achieves the largest improvement over the majority overall model in 4 of 6 tasks. This outcome suggests that both subsets of text provide different, but complementary, information.
The BERT models effectively capture the interaction between anchors and their surrounding context. Fine-tuning both BERT models significantly increases macro F1 over the best non-BERT model in all settings. Interestingly, the difference in performance between the web and clinical BERT models
| Model | Credibility & Obstinacy (JHM) | Credibility & Obstinacy (MIMIC) | Compliance (JHM) | Compliance (MIMIC) | Descriptors (JHM) | Descriptors (MIMIC) |
|---|---|---|---|---|---|---|
| Majority Overall | 0.21 ± 0.00 | 0.17 ± 0.00 | 0.29 ± 0.00 | 0.24 ± 0.00 | 0.16 ± 0.00 | 0.19 ± 0.00 |
| Majority Per Anchor | 0.67 ± 0.10 | 0.55 ± 0.04 | 0.68 ± 0.04 | 0.73 ± 0.01 | 0.82 ± 0.01 | 0.83 ± 0.00 |
| LR (Context) | 0.60 ± 0.05 | 0.58 ± 0.04 | 0.55 ± 0.01 | 0.68 ± 0.02 | 0.74 ± 0.03 | 0.60 ± 0.04 |
| LR (Context + Anchor) | 0.69 ± 0.02 | 0.65 ± 0.03 | 0.68 ± 0.04 | 0.80 ± 0.02 | 0.86 ± 0.02 | 0.76 ± 0.05 |
| Bert (Web) | 0.85 ± 0.04 | 0.76 ± 0.02 | 0.86 ± 0.01 | 0.92 ± 0.02 | 0.93 ± 0.01 | 0.86 ± 0.01 |
| Bert (Clinical) | 0.89 ± 0.03 | 0.78 ± 0.03 | 0.85 ± 0.02 | 0.92 ± 0.02 | 0.93 ± 0.02 | 0.86 ± 0.01 |
| - CLS Token | 0.89 ± 0.04 | 0.69 ± 0.03 | 0.84 ± 0.03 | 0.92 ± 0.01 | 0.90 ± 0.01 | 0.84 ± 0.03 |
| - Sentence Mean | 0.85 ± 0.06 | 0.69 ± 0.06 | 0.84 ± 0.03 | 0.92 ± 0.01 | 0.91 ± 0.01 | 0.84 ± 0.02 |
| - BERT Pooler | 0.83 ± 0.08 | 0.70 ± 0.07 | 0.84 ± 0.02 | 0.91 ± 0.02 | 0.89 ± 0.03 | 0.80 ± 0.03 |
## 4.2 Is Stigma Conveyed In The Same Manner About Different Demographic Groups?
The majority of bias-related tasks in NLP examine language which, while covert, contains some indication of the targeted demographic of identity group (e.g., racial slurs, sexist microaggressions) (Sue, 2010; Waseem et al., 2017). Here, we show that stigmatizing language in medical records uniquely *does not* target any racial group or sex.
Methods Results from §4.1 verify that our BERT
encoders learn semantic representations of the anchor n-grams which are informative for the downstream stigma characterization tasks. If language is used differently for different demographic groups, we expect the encoders to reflect this (Adam et al.,
2022). We can test our hypothesis by attempting to infer a patient's self-reported race and sex using each anchor n-gram's BERT representation.
Because our datasets represent a concatenation of notes from multiple clinical specialties which each have a unique demographic pool, it's possible to conflate the encoding of specialty knowledge with demographic knowledge. Additionally, any differences in the prevalence of our anchor n-grams or their associated labels between demographic groups could be exploited by a classifier.
For this reason, we ground inference performance against baselines which model one-hot-encoded representations of the anchor n-gram, clinical speciality, and the primary classification task label.
We also consider a version of the anchor embeddings generated after replacing gender-indicative pronouns (e.g., himself, her) and other identifiers with non-uniform gender associations (e.g., woman, husband) with gender-neutral alternatives. As before, additional experimental details are included in the appendix.
Results We present demographic inference results for the JHM dataset in Table 2 and report MIMIC results in the appendix. Across all but one experimental setting, inference performance achieved using the gender-neutral version of the embeddings is not significantly different from what is achieved by the metadata-only baselines. This trend suggests that the learned embeddings encode little to no information about a patient's race or sex that cannot be explained by underlying differences in prevalence between patient populations.
Future work is necessary to understand whether there exist semantic differences along other axes
(e.g., socioeconomic status, substance use, obesity)
(Healy et al., 2022).
## 4.3 Is Stigma Conveyed In The Same Manner Across Different Patient Populations?
Machine learning models trained on one distribution often experience a loss in performance when evaluated on a different distribution (Blitzer et al.,
2006; Harrigian et al., 2020). Understanding the causes of this loss is necessary for ensuring systems do not exacerbate existing social disparities
(Bender et al., 2021). Here, we identify speciality-specific nuances in stigmatizing language and highlight limitations of anchor-focused modeling.
Methods We evaluate models trained using the JHM dataset in §4.1 on the test set of the MIMIC
dataset, and vice-versa. We also conduct a qualitative error analysis to understand how stigmatizing language differs between the two datasets.
| Model | Credibility & Obstinacy (Sex) | Credibility & Obstinacy (Race) | Compliance (Sex) | Compliance (Race) | Descriptors (Sex) | Descriptors (Race) |
|---|---|---|---|---|---|---|
| Majority Baseline | 0.37 ± 0.01 | 0.26 ± 0.02 | 0.37 ± 0.02 | 0.29 ± 0.01 | 0.35 ± 0.02 | 0.26 ± 0.01 |
| Anchor | 0.50 ± 0.04 | 0.31 ± 0.05 | 0.42 ± 0.02 | 0.29 ± 0.01 | 0.50 ± 0.02 | 0.30 ± 0.03 |
| Label | 0.37 ± 0.01 | 0.27 ± 0.03 | 0.37 ± 0.02 | 0.29 ± 0.01 | 0.46 ± 0.07 | 0.26 ± 0.01 |
| Speciality | 0.44 ± 0.04 | 0.36 ± 0.04 | 0.53 ± 0.04 | 0.29 ± 0.01 | 0.58 ± 0.03 | 0.32 ± 0.03 |
| Anchor × Label | 0.50 ± 0.03 | 0.31 ± 0.05 | 0.46 ± 0.03 | 0.30 ± 0.01 | 0.53 ± 0.02 | 0.32 ± 0.04 |
| Anchor × Speciality | 0.51 ± 0.04 | 0.38 ± 0.03 | 0.54 ± 0.02 | 0.35 ± 0.03 | 0.56 ± 0.04 | 0.34 ± 0.02 |
| Label × Speciality | 0.47 ± 0.04 | 0.38 ± 0.04 | 0.53 ± 0.05 | 0.32 ± 0.02 | 0.58 ± 0.04 | 0.32 ± 0.03 |
| Anchor × Label × Speciality | 0.54 ± 0.01 | 0.35 ± 0.03 | 0.54 ± 0.03 | 0.36 ± 0.02 | 0.55 ± 0.03 | 0.36 ± 0.02 |
| Embedding | 0.76 ± 0.02 | 0.34 ± 0.02 | 0.57 ± 0.01 | 0.36 ± 0.02 | 0.61 ± 0.04 | 0.34 ± 0.03 |
| Embedding (Gender Neutral) | 0.59 ± 0.02 | 0.34 ± 0.06 | 0.52 ± 0.01 | 0.35 ± 0.01 | 0.52 ± 0.03 | 0.34 ± 0.02 |

Table 2: Inference of patient sex and race on the JHM dataset, per characterization task.
| Source ↓ Target → | Credibility & Obstinacy (JHM) | Credibility & Obstinacy (MIMIC) | Compliance (JHM) | Compliance (MIMIC) | Descriptors (JHM) | Descriptors (MIMIC) |
|---|---|---|---|---|---|---|
| JHM | 0.89 ± 0.03 | 0.70 ± 0.01 | 0.85 ± 0.02 | 0.86 ± 0.03 | 0.93 ± 0.02 | 0.81 ± 0.03 |
| MIMIC | 0.81 ± 0.03 | 0.78 ± 0.03 | 0.82 ± 0.02 | 0.92 ± 0.02 | 0.89 ± 0.03 | 0.86 ± 0.01 |

Table 3: Cross-dataset transfer performance; rows are the source (training) dataset and columns the target (evaluation) dataset for each task.
Results We observe consistent drops in performance when models are evaluated in a different domain than which they were trained (i.e., Table 3).
This performance loss is significant in all 6 transfer settings. What causes this loss? Are there spurious artifacts to which our models overfit (Wang et al.,
2022)? Or does each dataset contain unique stigmatizing language that arises disproportionately across patient populations?
Although many transfer errors can be attributed to differences in each dataset's joint anchor-label distribution, some special cases emerge. For example, models trained on the JHM dataset incorrectly characterize instances in MIMIC which describe parties secondary to the patient (e.g., family). This situation is more common in the MIMIC dataset due to ICU patients often being incapacitated. Models trained on the JHM dataset also struggle with statements in MIMIC from Psych ICU notes, where patients frequently describe their own behavior.
On one hand, these shortcomings appear to be a consequence of covariate shift (Sugiyama et al.,
2007), for which many general mitigation strategies exist (Ramponi and Plank, 2020). On the other hand, each of the errors we observe presents a unique linguistic challenge that may be better handled using targeted interventions. Few-shot word sense disambiguation techniques may improve transfer for low-volume anchor-label pairs (Kumar et al., 2019; Scarlini et al., 2020), while augmented annotations may reduce speaker/receiver confusion
(Rashkin et al., 2016; Hovy and Yang, 2021).
## 5 Discussion
The covert, highly contextual, and nondemographically aligned nature of stigmatizing language in medical records places it in a unique area of the abusive language research landscape.
The current reliance on domain experts to identify possible instances of bias using anchor terms is limiting given the adversarial relationship between abusive language and speakers (Nobata et al.,
2016). It also does not address abstract forms of stigma (Kopera et al., 2015) or stigmatizing pragmatics (Beach and Saha, 2021).
Methods for discovering stigmatizing language in medical records are poised to be highly impactful (Field and Tsvetkov, 2020). Counterfactual analyses may be instrumental for better characterizing the nuance between stigmatizing and non-stigmatizing clinical language (Kaushik et al.,
2019). Whether these nuances are uniform across patient populations (e.g., hospital systems, regions)
and providers (e.g., nurses, resident physicians) remains an open question not answerable from our datasets alone. Likewise, future work is necessary to understand whether clinical knowledge is necessary for models in this domain (Roberts, 2016).
## Ethics Statement
Our datasets were collected from real patients, contain protected health information (PHI), and are subject to HIPAA regulations. As a result, we took the utmost care to maintain data integrity and privacy. First, we obtained IRB approval to access and process the data. Second, we obtained permission and approval for all applications and libraries used to process the data. Third, data storage and computational experimentation was done on IRBapproved platforms.
## Limitations
In our work we faced numerous types of limitations that fall under different categories.
Data Our relatively small dataset size limits our analysis, especially with the use of language models. Furthermore, the label distribution is skewed across the different specialties (domains), which affects model performance, robustness and generalizability. The differences in distribution might be the result of how the data was collected, which was not in light of the anchor words, or due to the domain's nature and/or the medical providers' language of that specialty. Furthermore, the time frame that the data was sampled from might manifest certain biases that are different from other time frames. Finally, our datasets are only representative of a small number of specialties from two medical institutions. Patient populations and providers may vary greatly across medical fields and additional institutions.
Task The formulation of the labels for our task imposes limitations and challenges. Stigmatizing language is subjective and can vary between the perspective of the patient and the medical provider.
As a result, we are aware that our medical experts' annotations might impose a bias. Additionally, the negative connotations of language might be ambiguous and can change depending on a medical expert's identity, background and specialty, which creates a bias that is hard to mitigate.
Computational Resources We only used IRBapproved servers to access the dataset and perform the experiments. Because these platforms had limited computational capacity and lacked the specifications required to build more complex neural models, we were not able to include more recent language models in our experiments that might have yielded better performance. In the future, we hope to have access to machines that support more recent and state-of-the-art models.
## Acknowledgements
This work was supported by the National Institute on Minority Health and Health Disparities under grant number R01 MD017048. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIMHD, NIH, or Johns Hopkins University.
## References
Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini Soares, Charles Senteio, Jiaming Zeng, Moninder Singh, and Marzyeh Ghassemi. 2022.
Write it like you see it: Detectable differences in clinical notes by race lead to differential model recommendations. In *AAAI/ACM Conference on AI,*
Ethics, and Society.
Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. *arXiv preprint arXiv:1904.03323*.
Alina Baciu, Yamrot Negussie, Amy Geller, James N
Weinstein, National Academies of Sciences, Engineering, and Medicine, et al. 2017. The state of health disparities in the united states. In *Communities in action: Pathways to health equity*. National Academies Press (US).
Mary Catherine Beach and Somnath Saha. 2021. Quoting patients in clinical notes: First, do no harm. *Annals of internal medicine*, 174(10):1454–1455.
Mary Catherine Beach, Somnath Saha, Jenny Park, Janiece Taylor, Paul Drew, Eve Plank, Lisa A Cooper, and Brant Chee. 2021. Testimonial injustice: linguistic bias in the medical records of black patients and women. *Journal of general internal medicine*,
36(6):1708–1714.
Judith Bell and Mary M Lee. 2011. Why place and race matter: Impacting health through a focus on race and place. *Oakland, CA: PolicyLink*.
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM conference on fairness, accountability, and transparency*,
pages 610–623.
Charlotte Blease, Jan Walker, Catherine M DesRoches, and Tom Delbanco. 2021. New us law mandates access to clinical notes: implications for patients and clinicians.
John Blitzer, Ryan McDonald, and Fernando Pereira.
2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120–128.
Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In *Proceedings of the 2019 conference* on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 1664–
1674.
Lisa A Cooper, Debra L Roter, Kathryn A Carson, Mary Catherine Beach, Janice A Sabin, Anthony G
Greenwald, and Thomas S Inui. 2012. The associations of clinicians' implicit attitudes about race with medical visit communication and patient ratings of interpersonal care. *American journal of public health*,
102(5):979–987.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Hannah Eyre, Alec B Chapman, Kelly S Peterson, Jianlin Shi, Patrick R Alba, Makoto M Jones, Tamara L
Box, Scott L DuVall, and Olga V Patterson. 2021.
Launching into clinical space with medspacy: a new clinical text processing toolkit in python. In AMIA
Annual Symposium Proceedings, volume 2021, page 438. American Medical Informatics Association.
Anjalie Field and Yulia Tsvetkov. 2020. Unsupervised discovery of implicit gender bias. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 596–
608.
Tanmay Garg, Sarah Masud, Tharun Suresh, and Tanmoy Chakraborty. 2022. Handling bias in toxic speech detection: A survey. arXiv preprint arXiv:2202.00126.
Peter Glick and Susan T Fiske. 2001. An ambivalent alliance: Hostile and benevolent sexism as complementary justifications for gender inequality. *American* psychologist, 56(2):109.
Anna P Goddu, Katie J O'Conor, Sophie Lanzkron, Mustapha O Saheed, Somnath Saha, Monica E Peek, Carlton Haywood, and Mary Catherine Beach. 2018.
Do words matter? stigmatizing language and the transmission of bias in the medical record. Journal of general internal medicine, 33(5):685–691.
Alexander R Green, Dana R Carney, Daniel J Pallin, Long H Ngo, Kristal L Raymond, Lisa I Iezzoni, and Mahzarin R Banaji. 2007. Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients. Journal of general internal medicine, 22(9):1231–1238.
William J Hall, Mimi V Chapman, Kent M Lee, Yesenia M Merino, Tainayah W Thomas, B Keith Payne, Eugenia Eng, Steven H Day, and Tamera Coyne-Beasley. 2015. Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. *American journal of public health*, 105(12):e60–e76.
Keith Harrigian, Carlos Aguirre, and Mark Dredze.
2020. Do models of mental health based on social media data generalize? In *Findings of the association for computational linguistics: EMNLP 2020*,
pages 3774–3788.
Jennifer Huang Harris, Nomi C Levy-Carrick, and Ashwini Nadkarni. 2022. Opennotes: transparency versus stigma in patient care. *The Lancet Psychiatry*,
9(6):426–428.
Megan Healy, Alison Richard, and Khameer Kidia.
2022. How to reduce stigma and bias in clinical communication: a narrative review. Journal of General Internal Medicine, pages 1–8.
Gracie Himmelstein, David Bates, and Li Zhou.
2022. Examination of stigmatizing language in the electronic health record. *JAMA network open*,
5(1):e2144967–e2144967.
Y-X Ho, CS Gadd, KL Kohorst, and ST Rosenbloom.
2014. A qualitative analysis evaluating the purposes and practices of clinical documentation. *Applied* Clinical Informatics, 5(01):153–168.
Kelly M Hoffman, Sophie Trawalter, Jordan R Axt, and M Norman Oliver. 2016. Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proceedings of the National Academy of Sciences, 113(16):4296–4301.
Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588–602.
Md Saroar Jahan and Mourad Oussalah. 2021. A systematic review of hate speech automatic detection using natural language processing. arXiv preprint arXiv:2106.00742.
Alistair Johnson, Tom Pollard, Steven Horng, Leo Anthony Celi, and Roger Mark. 2023. Mimic-iv-note:
Deidentified free-text clinical notes.
Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H
Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. *Scientific data*, 3(1):1–9.
Divyansh Kaushik, Eduard Hovy, and Zachary Lipton.
2019. Learning the difference that makes a difference with counterfactually-augmented data. In *International Conference on Learning Representations*.
Maciej Kopera, Hubert Suszek, Erin Bonar, Maciej Myszka, Bartłomiej Gmaj, Mark Ilgen, and Marcin Wojnar. 2015. Evaluating explicit and implicit stigma of mental illness in mental health professionals and medical students. *Community mental health journal*,
51(5):628–634.
Sawan Kumar, Sharmistha Jat, Karan Saxena, and Partha Talukdar. 2019. Zero-shot word sense disambiguation using sense definition embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5670–
5681.
Alyssa Lees, Daniel Borkan, Ian Kivlichan, Jorge Nario, and Tesh Goyal. 2021. Capturing covertly toxic speech via crowdsourcing. In Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 14–20.
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models.
In *International Conference on Machine Learning*,
pages 6565–6576. PMLR.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Jean M McMahon and Kimberly Barsamian Kahn. 2016.
Benevolent racism? the impact of target race on ambivalent sexism. Group Processes & Intergroup Relations, 19(2):169–183.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality.
Advances in neural information processing systems, 26.
Alan Nelson. 2002. Unequal treatment: confronting racial and ethnic disparities in health care. Journal of the national medical association, 94(8):666.
Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the 25th international conference on world wide web, pages 145–153.
Jenny Park, Somnath Saha, Brant Chee, Janiece Taylor, and Mary Catherine Beach. 2021. Physician use of stigmatizing language in patient medical records.
JAMA Network Open, 4(7):e2117052–e2117052.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825–2830.
Alan Ramponi and Barbara Plank. 2020. Neural unsupervised domain adaptation in nlp—a survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838–6855.
Julia Raney, Ria Pal, Tiffany Lee, Samuel Ricardo Saenz, Devika Bhushan, Peter Leahy, Carrie Johnson, Cynthia Kapphahn, Michael A Gisondi, and Kim Hoang. 2021. Words matter: an antibias workshop for health care professionals to reduce stigmatizing language. *MedEdPORTAL*, 17:11115.
Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016.
Connotation frames: A data-driven investigation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 311–321.
Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
Kirk Roberts. 2016. Assessing the corpus size vs. similarity trade-off for word embeddings in clinical nlp.
In *Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)*, pages 54–63.
Nazanin Sabri, Valerio Basile, Tommaso Caselli, et al.
2021. Leveraging bias in pre-trained word embeddings for unsupervised microaggression detection. In CLiC-it.
Bianca Scarlini, Tommaso Pasini, and Roberto Navigli.
2020. Sensembert: Context-enhanced sense embeddings for multilingual word sense disambiguation.
In *Proceedings of the AAAI conference on artificial* intelligence, volume 34, pages 8758–8765.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the fifth international workshop on natural language processing for social media, pages 1–10.
Jenelle R Shanley, Deborah Shropshire, and Barbara L
Bonner. 2009. To report or not report: A physician's dilemma. *AMA Journal of Ethics*, 11(2):141–145.
Derald Wing Sue. 2010. Microaggressions in everyday life: Race, gender, and sexual orientation. John Wiley & Sons.
Derald Wing Sue, Christina M Capodilupo, Gina C
Torino, Jennifer M Bucceri, Aisha Holder, Kevin L
Nadal, and Marta Esquilin. 2007. Racial microaggressions in everyday life: implications for clinical practice. *American psychologist*, 62(4):271.
Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. 2007. Covariate shift adaptation by importance weighted cross validation. *Journal of Machine Learning Research*, 8(5).
Michael Sun, Tomasz Oliwa, Monica E Peek, and Elizabeth L Tung. 2022. Negative patient descriptors: Documenting racial bias in the electronic health record: Study examines racial bias in the patient descriptors used in the electronic health record. *Health* Affairs, 41(2):203–211.
Anna Valdez. 2021. Words matter: Labelling, bias and stigma in nursing. *Journal of Advanced Nursing*,
77(11):e33–e35.
Michelle Van Ryn, Diana J Burgess, John F Dovidio, Sean M Phelan, Somnath Saha, Jennifer Malat, Joan M Griffin, Steven S Fu, and Sylvia Perry. 2011.
The impact of racism on clinician cognition, behavior, and clinical decision making. *Du Bois review:*
social science research on race, 8(1):199–218.
Tianlu Wang, Rohit Sridhar, Diyi Yang, and Xuezhi Wang. 2022. Identifying and mitigating spurious correlations for improving robustness in nlp models.
In *Findings of the Association for Computational* Linguistics: NAACL 2022, pages 1719–1729.
Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. arXiv preprint arXiv:1705.09899.
Karen Werder, Alexa Curtis, Stephanie Reynolds, and Jason Satterfield. 2022. Addressing bias and stigma in the language we use with persons with opioid use disorder: A narrative review. *Journal of the* American Psychiatric Nurses Association, 28(1):9–
22.
David R Williams, Jourdyn A Lawrence, and Brigette A
Davis. 2019. Racism and health: evidence and needed research. *Annual review of public health*,
40:105–125.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771.
Katherine J Wolsiefer, Matthias Mehl, Gordon B
Moskowitz, Colleen K Cagno, Colin A Zestcott, Alma Tejeda-Padron, and Jeff Stone. 2021. Investigating the relationship between resident physician implicit bias and language use during a clinical encounter with hispanic patients. *Health Communication*, pages 1–9.
Wenjie Yin and Arkaitz Zubiaga. 2021. Towards generalisable hate speech detection: a review on obstacles and solutions. *PeerJ Computer Science*, 7:e598.
Valentina A Zavala, Paige M Bracci, John M Carethers, Luis Carvajal-Carmona, Nicole B Coggins, Marcia R Cruz-Correa, Melissa Davis, Adam J de Smith, Julie Dutil, Jane C Figueiredo, et al. 2021. Cancer health disparities in racial/ethnic minorities in the united states. *British journal of cancer*, 124(2):315–332.
Ciyou Zhu, Richard H Byrd, Peihuang Lu, and Jorge Nocedal. 1997. Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. *ACM Transactions on mathematical software (TOMS)*, 23(4):550–560.
## A Data

## A.1 Task Taxonomy
We present the task taxonomy developed for this study in Table 4, along with de-identified examples for each of the stigmatizing language classes.
The taxonomy was developed by clinicians on our team, drawing upon previous literature (Beach et al., 2021; Sun et al., 2022). We plan to expand our current anchor n-gram list in future work using context-aware keyword discovery.
## A.2 Anchor & Label Distribution
We provide the distribution of labels for each task in Table 5. This distribution is further broken down by anchor n-gram in Figure 1. Each task contains a subset of anchors with extreme class imbalance.
| Task | Class | JHM | MIMIC |
|---|---|---|---|
| Credibility & Obstinacy | Difficult | 413 | 526 |
| Credibility & Obstinacy | Disbelief | 438 | 609 |
| Credibility & Obstinacy | Exclude | 77 | 115 |
| Compliance | Negative | 1,578 | 893 |
| Compliance | Neutral | 283 | 439 |
| Compliance | Positive | 357 | 271 |
| Descriptors | Exclude | 430 | 496 |
| Descriptors | Negative | 843 | 1,221 |
| Descriptors | Neutral | 233 | 96 |
| Descriptors | Positive | 549 | 377 |
Table 5: Label distribution for each task.
## A.3 Annotator Agreement
Three annotators were responsible for labeling all data used in our study - one clinician C1 and two research assistants R1, R2. We present agreement matrices in Figure 2 for the MIMIC and JHM datasets.
Each instance in the JHM dataset was labeled by at least two annotators, with a subset labeled by three. A subset of instances in the MIMIC dataset were labeled by two annotators, with the remainder labeled by a single annotator. Annotators labeled
| Stigma Type | Class | Definition | Examples |
|---|---|---|---|
| Credibility & Obstinacy | Disbelief | Insinuates doubt about a patient's stated testimony. | adamant he doesn't smoke; claims to see a therapist |
| Credibility & Obstinacy | Difficult | Describes patient (or patient's family) perspective as inflexible/difficult/entrenched, typically with respect to their intentions. | insists on being admitted; adamantly opposed to limiting fruit intake |
| Credibility & Obstinacy | Exclude | Word/phrase is not used to characterize the patient or describe the patient's behavior; may refer to medical condition or treatment or to another person or context. | patient's friend insisted she go to the hospital; test claims submitted to insurance |
| Compliance | Negative | Patient not, unlikely to, or questionably following medical advice. | adherence to therapeutic medication is unclear; mother declines vaccines; struggles with medication and follow-up compliance |
| Compliance | Neutral | Not used to describe whether the patient is not following medical advice or rejecting treatment; often used to describe generically some future plan involving a hypothetical. Alternatively, see Exclude (Credibility & Obstinacy). | discussed the medication compliance; school refuses to provide adequate accommodations; feels that her parents' health has declined |
| Compliance | Positive | Patient following medical advice. | continues to be compliant with aspirin regimen; reports excellent adherence |
| Descriptors | Negative | Patient's demeanor or behavior is cast in a negative light; insinuates the patient is not being forthright or transparent; patient may be falsifying symptoms to get something they want. | drug-seeking behavior; concern for secondary gain; unwilling to meet with case manager; unfortunately a poor historian |
| Descriptors | Neutral | Negation of negative descriptors; insinuates the patient was expected to have a negative demeanor or be difficult to interact with. | his mother is the primary historian; interactive and cooperative; not combative or belligerent; dad seems angry with patient at times |
| Descriptors | Positive | Patient's demeanor or behavior is described in a positive light; patient is easy to interact with. | lovely 80 year old woman; well-groomed and holds good eye contact; pleasant and appropriate interaction with staff |
| Descriptors | Exclude | Patient self-description or description of another individual. Alternatively, see Exclude (Credibility & Obstinacy). | does not want providers to think she's malingering; reports feeling angry before her period; lives on pleasant avenue downtown |

Table 4: Task taxonomy with definitions and de-identified examples for each stigmatizing language class.
the data independently and then met with the larger team to resolve disagreements and discuss ambiguous cases.
Agreement scores prior to resolution were quite high, suggesting 1) the annotation taxonomy was clear and 2) the stigmatizing language we considered was generally not ambiguous in its impact.
We observed similar agreement trends for both datasets; the Descriptors task had the highest agreement, while the Credibility & Obstinacy task had the lowest agreement. The former consists of several highly polar anchor n-grams (e.g., pleasantly, unkempt), while the latter requires a higher degree of personal interpretation.
## A.4 Preprocessing
All clinical free text in our datasets was case-normalized and converted to an ASCII encoding prior to additional processing. The MIMIC dataset was de-identified before we obtained access to it.
The JHM dataset, however, was not subject to any de-identification procedures because it is protected within a secure cloud environment and we are not distributing assets derived from it.
Anchor terms are identified using regular expressions implemented in Python's re package. Up to 10 words to the left and 10 words to the right of the matched spans (based on whitespace) are maintained for annotation and modeling. Context sizes were specified *a priori* based on guidance from our clinical collaborators; future work may consider evaluating the effect this choice has on annotation and modeling outcomes.
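A minimal sketch of this matching step is shown below; the anchor list is a small illustrative subset (the full list was curated by clinicians and is not reproduced here), and the helper name is ours.

```python
import re

# Illustrative subset of anchor n-grams; the study's full list was clinician-curated.
ANCHORS = ["adamant", "claims", "insists", "compliance", "adherence",
           "pleasant", "unkempt", "poor historian", "drug-seeking"]
ANCHOR_RE = re.compile(r"\b(" + "|".join(re.escape(a) for a in ANCHORS) + r")\b",
                       flags=re.IGNORECASE)

def extract_instances(note_text, window=10):
    """Yield (anchor, context) pairs: the matched anchor plus up to `window`
    whitespace-delimited words on each side."""
    for match in ANCHOR_RE.finditer(note_text):
        left = note_text[:match.start()].split()[-window:]
        right = note_text[match.end():].split()[:window]
        context = " ".join(left + [match.group(0)] + right)
        yield match.group(0), context

note = "pt claims she has been taking her meds but adherence is unclear per pharmacy records"
for anchor, ctx in extract_instances(note):
    print(anchor, "->", ctx)
```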
For the logistic regression models, we use a custom pipeline to transform the raw text into feature space. The text instances are first tokenized using a clinical domain tokenizer implemented in the medspaCy library (Eyre et al., 2021). Tokens are recursively merged together to form phrases based on the bi-gram scoring function introduced by Mikolov et al. (2013) and implemented in Gensim (Řehůřek and Sojka, 2010). We use a scoring threshold of 10, minimum vocabulary frequency of 5, and recurse twice to identify 1-4 grams.
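The phrase-merging step can be sketched with Gensim's `Phrases` as below; a toy pre-tokenized corpus stands in for the medspaCy tokenizer output to keep the snippet self-contained, and the relaxed thresholds in the usage example are only there to make the toy data merge.

```python
from gensim.models.phrases import Phrases, Phraser

def build_phraser(token_lists, threshold=10, min_count=5):
    """Two passes of bi-gram merging, so phrases of up to 4 tokens can form."""
    bigram = Phraser(Phrases(token_lists, threshold=threshold, min_count=min_count))
    merged_once = [bigram[tokens] for tokens in token_lists]
    fourgram = Phraser(Phrases(merged_once, threshold=threshold, min_count=min_count))
    return lambda tokens: fourgram[bigram[tokens]]

# Toy corpus standing in for tokenized clinical notes.
corpus = [["poor", "historian", "at", "baseline"]] * 6 + [["patient", "is", "pleasant"]] * 6
to_phrases = build_phraser(corpus, threshold=0.5, min_count=2)
print(to_phrases(["poor", "historian", "today"]))  # e.g. ['poor_historian', 'today']
```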
## B The Role Of Context (§4.1)

## B.1 Experimental Design
The annotated dataset is split into training, development, and test subsets at a 70/20/10 ratio. Instances are assigned randomly into each subset, using their associated patient identifiers as stratification criteria to limit data leakage. The training and development subsets are further split at random to facilitate 5-fold cross-validation.
## B.2 Models
The Majority Per Anchor baseline outputs the following class probabilities given an input anchor n-gram w:
$$p(y\mid w)={\frac{C(w,y)+\alpha}{\sum_{y^{\prime}\in{\mathcal{Y}}}C(w,y^{\prime})+|{\mathcal{Y}}|\alpha}}$$
where C(w, y) is the number of examples with anchor w having class y in the training data, Y is the set of possible classes y, and α is a smoothing hyperparameter. We use α = 1 for all of our experiments.
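This smoothed estimate amounts to a per-anchor count table with Laplace smoothing; the following is our own minimal rendering of the formula, not the authors' code.

```python
from collections import Counter, defaultdict

class MajorityPerAnchor:
    """p(y | w) = (C(w, y) + alpha) / (sum_y' C(w, y') + |Y| * alpha)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.counts = defaultdict(Counter)   # anchor -> class counts
        self.classes = set()

    def fit(self, anchors, labels):
        for w, y in zip(anchors, labels):
            self.counts[w][y] += 1
            self.classes.add(y)
        return self

    def predict_proba(self, anchor):
        total = sum(self.counts[anchor].values())
        denom = total + len(self.classes) * self.alpha
        return {y: (self.counts[anchor][y] + self.alpha) / denom for y in self.classes}

    def predict(self, anchor):
        probs = self.predict_proba(anchor)
        return max(probs, key=probs.get)

model = MajorityPerAnchor(alpha=1.0).fit(
    ["adamant", "adamant", "pleasant"], ["Disbelief", "Disbelief", "Positive"])
print(model.predict("adamant"))              # Disbelief
print(model.predict_proba("unseen-anchor"))  # uniform over the observed classes
```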
The logistic regression baselines use scikit-learn (Pedregosa et al., 2011) for data transformations and classifier training. For the TF-IDF
representations, we use an ℓ2 row-wise norm. As a classifier, we use multinomial logistic regression optimized using lbfgs (Zhu et al., 1997). We balance class weights and perform a grid search over the following ℓ2 regularization parameters: 0.01, 0.03, 0.1, 0.3, 1, 3, 5, 10. The model which maximizes macro F1-score in each training split's associated development set is chosen for application on the test set.
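A sketch of this baseline with scikit-learn is given below. Whether the listed grid values map to scikit-learn's `C` (the inverse regularization strength) or to the penalty weight itself is not stated, so treating them as `C` is our assumption; the toy texts are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(norm="l2")),              # l2 row-wise norm
    ("clf", LogisticRegression(
        penalty="l2", solver="lbfgs",                    # lbfgs fits a multinomial model
        class_weight="balanced", max_iter=1000)),        # for multi-class targets
])
param_grid = {"clf__C": [0.01, 0.03, 0.1, 0.3, 1, 3, 5, 10]}
search = GridSearchCV(pipeline, param_grid, scoring="f1_macro", cv=5)

# Toy data; in the paper the inputs are the anchor/context features described above.
texts = ["claims to see a therapist", "reports excellent adherence"] * 10
labels = ["Disbelief", "Positive"] * 10
search.fit(texts, labels)
print(search.best_params_, round(search.best_score_, 2))
```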
We use Hugging Face's transformers library
(Wolf et al., 2019) to initialize all BERT models and fine-tune them using code written in PyTorch (Paszke et al., 2019). We train all models using a batch size of 16, a fixed learning rate of 5e-05, a dropout probability of 0.1, and class-balanced cross-entropy loss. As an optimizer, we use AdamW (Loshchilov and Hutter, 2017). We evaluate the model every 50 updates and save the model which maximizes macro F1-score on the training split's associated development data. Due to compute limitations in our HIPAA-compliant environment (i.e., limited GPU access), we do an initial exploration of the ℓ2 regularization strength on one split of the data for each classification task.
We find the regularization strength to have minimal effect on performance for decay values of 1e-5, 1e-4, and 1e-3; we set a decay weight of 1e-5 for all remaining experiments.
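A condensed sketch of this fine-tuning setup follows; the checkpoint name, class weights, and helper names are illustrative assumptions rather than the authors' code, and BERT's default dropout of 0.1 is left unchanged.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"   # clinical variant; web variant: "bert-base-uncased"
NUM_CLASSES = 3                                  # e.g., Credibility & Obstinacy

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=1e-5)

# Illustrative class weights; in practice these come from inverse class frequencies.
loss_fn = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.5, 4.0]))

def train_one_epoch(texts, labels, evaluate_dev, eval_every=50):
    """`labels` are integer class ids; `evaluate_dev(model)` returns dev macro F1."""
    loader = DataLoader(list(zip(texts, labels)), batch_size=16, shuffle=True,
                        collate_fn=lambda batch: batch)
    best_f1 = -1.0
    model.train()
    for step, batch in enumerate(loader, start=1):
        batch_texts, batch_labels = zip(*batch)
        enc = tokenizer(list(batch_texts), padding=True, truncation=True, return_tensors="pt")
        loss = loss_fn(model(**enc).logits, torch.tensor(batch_labels))
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        if step % eval_every == 0:               # evaluate every 50 updates
            dev_f1 = evaluate_dev(model)
            if dev_f1 > best_f1:
                best_f1 = dev_f1
                model.save_pretrained("best_checkpoint")
    return best_f1
```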
Readers should keep in mind that the clinical BERT models (Alsentzer et al., 2019) were pretrained on MIMIC-III (Johnson et al., 2016), which may have a small amount of note and/or patient overlap with our MIMIC-IV discharge summary sample. Despite this potential leakage, the clinical BERT models do not consistently outperform the BERT models pretrained using general web data (Devlin et al., 2018). Understanding whether clinical knowledge is necessary to fully understand stigmatizing language in the context of a medical record is left as an open question for future research. Provided sufficient data privacy protections, we also see opportunities to leverage larger generative models.
All experiments were run in a HIPAA-compliant remote computing environment secured with OS-level group permissions. We used servers outfitted with NVIDIA Tesla M60 GPUs (2 x 8 GB VRAM)
and Intel Xeon E5-3698 CPUs (2.20 GHz).
## C Demographic Differences In Stigmatizing Language (§4.2)

## C.1 Experimental Design
We train new clinical BERT models for each of the three classification tasks. This time, we forego cross-validation and instead use a single training, development, and test split. We detach each task's classification head and pass the anchor n-grams through their respective models to extract their internal mean-pooled representation.
Maintaining separation between the three classification tasks, we randomly split the subset of patients whose data was used for training the BERT
models into 5 non-overlapping groups and use these groups as folds for cross-validation. Using 4 of the groups for training, an unregularized logistic regression classifier is fit to independently predict race and sex from the internal semantic representations. We evaluate separation using data from the held-out group. This process is repeated 5 times until each patient group has been used as the held-out test group.
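The probing protocol can be sketched as follows; `GroupKFold` stands in for the authors' patient-level grouping, the unregularized classifier uses `penalty=None` (use `penalty='none'` on older scikit-learn releases), and the random toy data exists only to make the snippet runnable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupKFold

def probe_attribute(embeddings, attribute, patient_ids, n_splits=5):
    """Patient-grouped cross-validation: fit an unregularized logistic regression
    on anchor embeddings and report held-out macro F1."""
    scores = []
    for train_idx, test_idx in GroupKFold(n_splits).split(embeddings, attribute, patient_ids):
        clf = LogisticRegression(penalty=None, max_iter=2000)
        clf.fit(embeddings[train_idx], attribute[train_idx])
        preds = clf.predict(embeddings[test_idx])
        scores.append(f1_score(attribute[test_idx], preds, average="macro"))
    return np.mean(scores), np.std(scores)

# Toy stand-ins: 100 mean-pooled anchor embeddings of dim 768, binary labels, 20 patients.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))
sex = rng.integers(0, 2, size=100)
patients = rng.integers(0, 20, size=100)
print(probe_attribute(X, sex, patients))
```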
The joint race and sex distribution of instances is
**JHM**

| Task | Class | Black or African American (Female) | Black or African American (Male) | White or Caucasian (Female) | White or Caucasian (Male) | Other (Female) | Other (Male) |
|---|---|---|---|---|---|---|---|
| Credibility & Obstinacy | Difficult | 159 (129) | 94 (76) | 87 (60) | 40 (29) | 11 (10) | 17 (13) |
| Credibility & Obstinacy | Disbelief | 160 (133) | 142 (117) | 59 (46) | 47 (39) | 8 (7) | 18 (17) |
| Credibility & Obstinacy | Exclude | 20 (18) | 20 (17) | 20 (13) | 11 (9) | 3 (3) | 3 (2) |
| Compliance | Negative | 714 (499) | 480 (324) | 187 (146) | 104 (81) | 22 (20) | 41 (29) |
| Compliance | Neutral | 107 (102) | 87 (81) | 43 (37) | 31 (29) | 4 (4) | 4 (3) |
| Compliance | Positive | 146 (135) | 105 (93) | 50 (45) | 35 (31) | 9 (7) | 6 (5) |
| Descriptors | Exclude | 146 (132) | 132 (108) | 68 (55) | 58 (56) | 8 (8) | 11 (9) |
| Descriptors | Negative | 253 (172) | 254 (189) | 134 (72) | 144 (89) | 17 (11) | 34 (18) |
| Descriptors | Neutral | 78 (69) | 51 (50) | 54 (52) | 32 (29) | 6 (5) | 8 (8) |
| Descriptors | Positive | 232 (185) | 117 (98) | 111 (91) | 59 (48) | 19 (16) | 9 (9) |

**MIMIC**

| Task | Class | Black or African American (Female) | Black or African American (Male) | White or Caucasian (Female) | White or Caucasian (Male) | Other (Female) | Other (Male) |
|---|---|---|---|---|---|---|---|
| Credibility & Obstinacy | Difficult | 35 (32) | 48 (47) | 177 (167) | 189 (177) | 31 (29) | 32 (31) |
| Credibility & Obstinacy | Disbelief | 64 (64) | 56 (55) | 209 (198) | 191 (179) | 31 (30) | 41 (41) |
| Credibility & Obstinacy | Exclude | 13 (13) | 8 (8) | 36 (36) | 43 (43) | 7 (6) | 7 (7) |
| Compliance | Negative | 127 (121) | 109 (93) | 232 (219) | 277 (258) | 64 (61) | 56 (54) |
| Compliance | Neutral | 30 (30) | 26 (25) | 146 (140) | 160 (157) | 23 (23) | 35 (33) |
| Compliance | Positive | 23 (23) | 23 (22) | 81 (79) | 93 (90) | 23 (22) | 21 (19) |
| Descriptors | Exclude | 50 (49) | 36 (35) | 161 (157) | 171 (162) | 29 (29) | 19 (19) |
| Descriptors | Negative | 106 (84) | 126 (112) | 341 (309) | 514 (419) | 49 (44) | 49 (46) |
| Descriptors | Neutral | 4 (4) | 10 (9) | 38 (38) | 29 (29) | 5 (5) | 6 (6) |
| Descriptors | Positive | 33 (33) | 13 (13) | 157 (152) | 105 (104) | 37 (35) | 20 (20) |

Table 6: Joint race and sex distribution of instances in the JHM (top) and MIMIC (bottom) datasets.
provided in Table 6. Note that we ignore instances in which a patient either declined to report or did not self-report their race or sex. After this exclusion, we are left with 5,129 of the original 5,201 instances for the JHM dataset, and 4,875 of the original 5,043 instances for the MIMIC dataset.
## C.2 Baselines
Our clinical datasets represent a concatenation of notes from different specialties. Each speciality has a unique patient demographic pool and thus invites the possibility of conflating the encoding of specialty-specific knowledge with demographic-specific knowledge. For example, OB-GYN notes come specifically from female patients and our sample of JHM pediatric notes come from a population which is 95% black. Encoding the speciality would naturally allow inference of patient demographics.
Additionally, any differences in prevalence of our anchor n-grams between demographic groups may be exploited by the linear classifier. The latter is expected given the extant literature which highlights demographic disparities in usage of stigmatizing language (Beach and Saha, 2021; Beach et al., 2021).
For these reasons, we ground the predictive performance achieved using the semantic representations against simple logistic regression baselines which model one-hot-encoded representations of the anchor n-gram, clinical speciality, and the primary stigmatizing language classification label. A
qualitative review of instances in both datasets suggests there are likely additional auxiliary attributes not accounted for here (e.g., diagnoses) that would further explain the encoding of race and sex in the embeddings. For the MIMIC dataset, we consider the service which wrote the discharge summary
(e.g., SURG, GYN, PSYCH) to be the speciality.
In Table 7, we report how well each of the baseline attributes considered in this experiment can itself be inferred. The anchor n-grams, task label, and speciality are all predictable from the BERT embeddings, confirming the necessity of the baselines.
**Credibility & Obstinacy**

| Model | JHM Anchor | JHM Label | JHM Speciality | JHM Sex | JHM Race | MIMIC Anchor | MIMIC Label | MIMIC Speciality | MIMIC Sex | MIMIC Race |
|---|---|---|---|---|---|---|---|---|---|---|
| Majority Baseline | 0.03 ± 0.00 | 0.20 ± 0.02 | 0.11 ± 0.01 | 0.37 ± 0.01 | 0.26 ± 0.02 | 0.02 ± 0.00 | 0.22 ± 0.01 | 0.06 ± 0.01 | 0.33 ± 0.01 | 0.27 ± 0.01 |
| Anchor | - | 0.51 ± 0.05 | 0.14 ± 0.03 | 0.50 ± 0.04 | 0.31 ± 0.05 | - | 0.51 ± 0.02 | 0.07 ± 0.01 | 0.52 ± 0.04 | 0.27 ± 0.01 |
| Label | 0.08 ± 0.01 | - | 0.11 ± 0.01 | 0.37 ± 0.01 | 0.27 ± 0.03 | 0.09 ± 0.01 | - | 0.06 ± 0.01 | 0.51 ± 0.05 | 0.27 ± 0.01 |
| Speciality | 0.07 ± 0.02 | 0.31 ± 0.02 | - | 0.44 ± 0.04 | 0.36 ± 0.04 | 0.05 ± 0.01 | 0.32 ± 0.03 | - | 0.55 ± 0.05 | 0.28 ± 0.02 |
| Anchor × Label | - | - | 0.18 ± 0.05 | 0.50 ± 0.03 | 0.31 ± 0.05 | - | - | 0.07 ± 0.01 | 0.49 ± 0.04 | 0.28 ± 0.02 |
| Anchor × Speciality | - | 0.52 ± 0.06 | - | 0.51 ± 0.04 | 0.38 ± 0.03 | - | 0.60 ± 0.07 | - | 0.51 ± 0.02 | 0.28 ± 0.02 |
| Label × Speciality | 0.11 ± 0.02 | - | - | 0.47 ± 0.04 | 0.38 ± 0.04 | 0.10 ± 0.02 | - | - | 0.54 ± 0.04 | 0.27 ± 0.01 |
| Anchor × Label × Speciality | - | - | - | 0.54 ± 0.01 | 0.35 ± 0.03 | - | - | - | 0.51 ± 0.02 | 0.29 ± 0.02 |
| Embedding | 0.76 ± 0.05 | 0.95 ± 0.03 | 0.24 ± 0.03 | 0.76 ± 0.02 | 0.34 ± 0.02 | 0.92 ± 0.02 | 0.87 ± 0.03 | 0.11 ± 0.01 | 0.75 ± 0.02 | 0.30 ± 0.03 |
| Embedding (Gender Neutral) | 0.77 ± 0.06 | 0.93 ± 0.02 | 0.25 ± 0.04 | 0.59 ± 0.02 | 0.34 ± 0.06 | 0.92 ± 0.01 | 0.86 ± 0.06 | 0.10 ± 0.01 | 0.49 ± 0.03 | 0.33 ± 0.02 |

**Compliance**

| Model | JHM Anchor | JHM Label | JHM Speciality | JHM Sex | JHM Race | MIMIC Anchor | MIMIC Label | MIMIC Speciality | MIMIC Sex | MIMIC Race |
|---|---|---|---|---|---|---|---|---|---|---|
| Majority Baseline | 0.01 ± 0.00 | 0.28 ± 0.01 | 0.08 ± 0.00 | 0.37 ± 0.02 | 0.29 ± 0.01 | 0.01 ± 0.00 | 0.24 ± 0.01 | 0.05 ± 0.00 | 0.33 ± 0.01 | 0.26 ± 0.01 |
| Anchor | - | 0.59 ± 0.02 | 0.18 ± 0.04 | 0.42 ± 0.02 | 0.29 ± 0.01 | - | 0.66 ± 0.02 | 0.05 ± 0.00 | 0.54 ± 0.02 | 0.27 ± 0.02 |
| Label | 0.03 ± 0.00 | - | 0.14 ± 0.01 | 0.37 ± 0.02 | 0.29 ± 0.01 | 0.03 ± 0.01 | - | 0.05 ± 0.00 | 0.47 ± 0.03 | 0.26 ± 0.01 |
| Speciality | 0.03 ± 0.00 | 0.28 ± 0.01 | - | 0.53 ± 0.04 | 0.29 ± 0.01 | 0.02 ± 0.01 | 0.34 ± 0.03 | - | 0.55 ± 0.02 | 0.26 ± 0.01 |
| Anchor × Label | - | - | 0.27 ± 0.02 | 0.46 ± 0.03 | 0.30 ± 0.01 | - | - | 0.07 ± 0.01 | 0.52 ± 0.03 | 0.31 ± 0.02 |
| Anchor × Speciality | - | 0.62 ± 0.05 | - | 0.54 ± 0.02 | 0.35 ± 0.03 | - | 0.67 ± 0.03 | - | 0.56 ± 0.02 | 0.30 ± 0.02 |
| Label × Speciality | 0.08 ± 0.01 | - | - | 0.53 ± 0.05 | 0.32 ± 0.02 | 0.08 ± 0.02 | - | - | 0.56 ± 0.03 | 0.28 ± 0.01 |
| Anchor × Label × Speciality | - | - | - | 0.54 ± 0.03 | 0.36 ± 0.02 | - | - | - | 0.54 ± 0.01 | 0.30 ± 0.02 |
| Embedding | 0.77 ± 0.04 | 1.00 ± 0.00 | 0.38 ± 0.05 | 0.57 ± 0.01 | 0.36 ± 0.02 | 0.86 ± 0.04 | 1.00 ± 0.00 | 0.13 ± 0.04 | 0.56 ± 0.03 | 0.33 ± 0.02 |
| Embedding (Gender Neutral) | 0.74 ± 0.04 | 1.00 ± 0.00 | 0.39 ± 0.05 | 0.52 ± 0.01 | 0.35 ± 0.01 | 0.85 ± 0.03 | 1.00 ± 0.00 | 0.12 ± 0.02 | 0.50 ± 0.04 | 0.34 ± 0.02 |

**Descriptors**

| Model | JHM Anchor | JHM Label | JHM Speciality | JHM Sex | JHM Race | MIMIC Anchor | MIMIC Label | MIMIC Speciality | MIMIC Sex | MIMIC Race |
|---|---|---|---|---|---|---|---|---|---|---|
| Majority Baseline | 0.01 ± 0.00 | 0.14 ± 0.01 | 0.10 ± 0.01 | 0.35 ± 0.02 | 0.26 ± 0.01 | 0.00 ± 0.00 | 0.18 ± 0.00 | 0.05 ± 0.00 | 0.34 ± 0.02 | 0.28 ± 0.00 |
| Anchor | - | 0.83 ± 0.03 | 0.22 ± 0.02 | 0.50 ± 0.02 | 0.30 ± 0.03 | - | 0.87 ± 0.03 | 0.13 ± 0.01 | 0.56 ± 0.03 | 0.28 ± 0.01 |
| Label | 0.07 ± 0.00 | - | 0.10 ± 0.01 | 0.46 ± 0.07 | 0.26 ± 0.01 | 0.03 ± 0.00 | - | 0.06 ± 0.01 | 0.58 ± 0.03 | 0.28 ± 0.00 |
| Speciality | 0.01 ± 0.00 | 0.28 ± 0.03 | - | 0.58 ± 0.03 | 0.32 ± 0.03 | 0.03 ± 0.00 | 0.27 ± 0.03 | - | 0.44 ± 0.03 | 0.28 ± 0.00 |
| Anchor × Label | - | - | 0.30 ± 0.02 | 0.53 ± 0.02 | 0.32 ± 0.04 | - | - | 0.13 ± 0.01 | 0.56 ± 0.02 | 0.29 ± 0.01 |
| Anchor × Speciality | - | 0.84 ± 0.03 | - | 0.56 ± 0.04 | 0.34 ± 0.02 | - | 0.86 ± 0.02 | - | 0.57 ± 0.02 | 0.31 ± 0.02 |
| Label × Speciality | 0.09 ± 0.01 | - | - | 0.58 ± 0.04 | 0.32 ± 0.03 | 0.11 ± 0.01 | - | - | 0.57 ± 0.04 | 0.28 ± 0.00 |
| Anchor × Label × Speciality | - | - | - | 0.55 ± 0.03 | 0.36 ± 0.02 | - | - | - | 0.58 ± 0.02 | 0.30 ± 0.02 |
| Embedding | 0.82 ± 0.06 | 1.00 ± 0.00 | 0.45 ± 0.02 | 0.61 ± 0.04 | 0.34 ± 0.03 | 0.91 ± 0.02 | 1.00 ± 0.00 | 0.24 ± 0.05 | 0.58 ± 0.02 | 0.33 ± 0.02 |
| Embedding (Gender Neutral) | 0.82 ± 0.07 | 1.00 ± 0.00 | 0.44 ± 0.04 | 0.52 ± 0.03 | 0.34 ± 0.02 | 0.90 ± 0.03 | 1.00 ± 0.00 | 0.24 ± 0.04 | 0.54 ± 0.03 | 0.31 ± 0.02 |

Table 7: Ability to infer each baseline attribute (anchor, label, speciality) and demographic variable (sex, race), per task and dataset.
## C.3 Demographic-Neutral Substitutions
Sex During an initial run of the experiment, we recognized that patient sex could be easily inferred from the semantic representations due to the cues from gender-specific language. We adopt a naive approach to mitigate the presence of overt gender-informative language affecting conclusions within the demographic inference experiments. We replace gendered pronouns (e.g., he, herself), identifiers of sex (e.g., male, Mrs. Smith), and terms with non-uniform gender associations (e.g., husband, wife). The full mapping of substitutions is provided below in Table 8.
There are two limitations with this approach.
First, we do not make substitutions for any patient names in the text. Second, we do not address any grammatical issues that arise after substitution of a gendered word (e.g., "he denies" → "they denies").
In practice, the former implies that the true amount of the sex-related information encoded in the learned embeddings may be lower than current estimates suggest. This case would only further strengthen our current conclusions. Regarding the latter, we find that any grammatical inconsistencies do not affect our ability to infer the stigma labels associated with each anchor embedding (Table 7).
Race We briefly explored using rules to obfuscate racial identifiers as well (e.g., "43 y.o. Asian"). We found this procedure difficult to perform automatically (e.g., "wearing black T-shirt") and likely to be a low-yield process based on a qualitative review of the instances in both datasets. For this reason, we opted not to include any race-neutral substitutions. Nonetheless, the lack of obfuscation should be noted while interpreting our results.
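For concreteness, a minimal version of the substitution step, using a subset of the Table 8 mapping (the helper name and example sentence are ours), could look like this:

```python
import re

# Subset of the Table 8 mapping; multi-token titles like "Mr. Smith" would need
# additional name handling that we skip here.
GENDER_NEUTRAL = {
    "he": "they", "she": "they",
    "him": "them", "her": "them",
    "his": "their", "hers": "their",
    "himself": "themselves", "herself": "themselves",
    "male": "person", "female": "person", "girl": "person", "boy": "person",
    "man": "person", "woman": "person",
    "husband": "partner", "wife": "partner",
}
PATTERN = re.compile(r"\b(" + "|".join(GENDER_NEUTRAL) + r")\b", re.IGNORECASE)

def neutralize(text):
    """Replace gender-informative words; no attempt is made to repair the
    resulting grammar (e.g., "he denies" becomes "they denies"), mirroring
    the limitation discussed above."""
    return PATTERN.sub(lambda m: GENDER_NEUTRAL[m.group(0).lower()], text)

print(neutralize("He states his wife manages the medications herself."))
# they states their partner manages the medications themselves.
```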
## D Dataset Differences In Stigmatizing Language (§4.3)

## D.1 Experimental Design
We use the clinical BERT models trained during the §4.1 experiments to evaluate domain-transfer.
That is, we take the clinical BERT models (with anchor pooling) trained within each cross-validation fold and apply them to the test set of the opposite dataset (JHM → MIMIC, MIMIC → JHM).
| Original | Replacement |
|-------------------------------------|---------------|
| He, She | They |
| Him, Her | Them |
| His, Hers | Their |
| Himself, Herself | Themselves |
| Male, Female, Girl, Boy, Man, Woman | Person |
| Mr. XX, Ms. XX, Mrs. XX, Miss. XX | Patient |
| Husband, Wife | Partner |
Table 8: Gender-informative words and their associated gender-neutral substitutions.
We *do not* modify or otherwise tune the existing models to improve transfer performance, with the primary goal being to understand differences in stigmatizing language usage between datasets (not to optimize generalization). To facilitate our qualitative analysis, we cache all test-set predictions and organize them into four groups based on whether the in-domain (source = target) and out-of-domain
(source ≠ target) models characterized them correctly.
## D.2 Error Distribution
Errors made by both the in-domain and out-of-domain models are those which appear to be a consequence of task difficulty and model underspecification. Examples include hypothetical statements
(e.g., "if the patient declines") and instances containing both positive and negative sentiment (e.g.,
"disinhibited, but charming").
Errors made by the out-of-domain model, but not the in-domain model, are a consequence of distribution shift. The two notable areas of shift include 1) the prevalence of statements regarding individuals other than the patient (e.g., family),
and 2) differences in class priors conditioned on each anchor. The latter is sometimes the result of speciality-specific nuances (e.g., psychiatry notes include more self-descriptions).
Errors made by the in-domain model, but not the out-of-domain model, are generally a consequence of the out-of-domain model having seen more training examples containing the test example's anchor.
(Figure 1, image omitted: distribution of labels for each task broken down by anchor n-gram. Figure 2, image omitted: annotator agreement matrices for the JHM and MIMIC datasets.)
varab-xu-2023-abstractive | Abstractive Summarizers are Excellent Extractive Summarizers | https://aclanthology.org/2023.acl-short.29 | Extractive and abstractive summarization designs have historically been fragmented, limiting the benefits that often arise from compatible model architectures. In this paper, we explore the potential synergies of modeling extractive summarization with an abstractive summarization system and propose three novel inference algorithms using the sequence-to-sequence architecture. We evaluate them on the CNN {\&} Dailymail dataset and show that recent advancements in abstractive system designs enable abstractive systems to not only compete, but even surpass the performance of extractive systems with custom architectures. To our surprise, abstractive systems achieve this without being exposed to extractive oracle summaries and, therefore, for the first time allow a single model to produce both abstractive and extractive summaries. This evidence questions our fundamental understanding of extractive system design, and the necessity for extractive labels while pathing the way for promising research directions in hybrid models. | Abstractive Summarizers are Excellent Extractive Summarizers Daniel Varab Novo Nordisk IT University of Copenhagen [email protected]
Yumo Xu
School of Informatics, University of Edinburgh
[email protected]

## Abstract
In this paper, we explore the efficacy of modeling extractive summarization with an abstractive summarization system. We propose three novel inference algorithms for sequence-to-sequence models, evaluate them on established summarization benchmarks, and show that recent advancements in abstractive designs have enabled them to compete directly with extractive systems with custom extractive architectures. We show for the first time that a single model can simultaneously produce both state-of-the-art abstractive and extractive summaries, introducing a unified paradigm for summarization systems. Our results question fundamental concepts of extractive systems and pave the way for a new paradigm - generative modeling for extractive summarization.1
## 1 Introduction
Extractive summarization selects a set of salient sentences from the original document(s) and composes them into a summary. Compared to abstractive summaries, made up of words or phrases that do not appear in the input document, extractive summaries are less flexible but avoid inconsistencies and hallucinations. The pipeline for building an extractive summarizer typically consists of two separate stages: *sentence labeling* and *extractive modeling*. Since few summarization datasets come with gold labels indicating which document sentences are summary-worthy, the first step is to create *oracle* sentence labels (Nallapati et al.,
2017). The task is commonly *modeled* with a sequence labeling architecture (Cheng and Lapata, 2016) where a salience score is estimated for each document sentence, and top-ranked sentences are selected for summary inclusion. Recent work has expanded extractive modeling to higher-order sentence selection to account for complex label dependencies, via extracting sentences stepwise (Narayan et al., 2020), or reranking a small set of summary candidates (Zhong et al., 2020; An et al., 2022).

1 We distribute the code to replicate the results presented in the paper at https://github.com/danielvarab/GenX.

(Figure 1, image omitted: overview of the proposed inference algorithms for generative extractive summarization.)
In this work, we revisit these fundamental concepts in extractive summarization. Specifically, we highlight that heuristically-derived sentence labels can be highly suboptimal (Narayan et al., 2018b; Xu and Lapata, 2022b), and that customized neural architectures for extractive modeling prevent taking advantage of independent improvements.
We recognize that generative modeling with a neural encoder-decoder architecture (Bahdanau et al.,
2015; Sutskever et al., 2014), the *de facto* choice for abstractive summarization (Nallapati et al., 2017; Zhang et al., 2020; Lewis et al., 2020), constitutes a promising direction for extractive summarization. In particular, such models learn directly from abstractive references and therefore do not require sentence labeling, while also embodying the extractive capabilities previously enabled by specialized neural architectures. Existing literature has established many and varied connections between abstractive and extractive modeling such as copy mechanism (See et al., 2017), content selection (Kedzie et al., 2018; Gehrmann et al., 2018),
and generation guidance (Dou et al., 2021). These connections, however, are mostly *abstract-centric*: they are identified or constructed to improve abstractive summarization. In contrast, there are few studies from an *extract-centric* point of view.
In this work, we propose a new summarization paradigm that unifies extractive and abstractive summarization with generative modeling, *without* compromising abstractive performance. To this end, we treat extractive summarization as an *inference*-time task and explore methods for adapting a pre-trained abstractive system for extractive summarization without further optimization.
We hypothesize that an abstractive system can be used as a summary evaluator for not only abstracts but extracts as well. A model optimized on abstractive references should be able to provide an accurate quality estimation for an extractive candidate summary when conditioned on the input document. A straightforward approach to validate this assumption is to search for the best document extract with an abstractive model for candidate evaluation. However, performing an exhaustive search over a combinatorial space of all eligible summary candidates is computationally intractable. To tackle this challenge, we propose GenX, Generative eXtractive summarization, which introduces a set of inference algorithms
(shown in Figure 1) to reduce the search complexity via various approximations of the entire search space, at either sentence- or summary-level.
Experiments show that GenX achieves competitive or superior performance compared to custom systems developed for extractive summarization on the CNN/DM benchmark without compromising its ability to generate abstracts. In particular, for one-stage summarization the proposed method shows superior results to custom extractive state-of-the-art systems. GenX also exhibits high robustness in zero-shot transfer: on XSum, its zero-shot performance surprisingly surpasses its fully supervised counterpart. We further conduct an extensive analysis of GenX's properties, providing potential directions for future research on generative modeling for extractive summarization.
## 2 Generative Modeling For Extracts
Given a generative model θ trained on summarization data comprising documents and abstractive references, at inference time, for an input document D and a summary sequence Y , we estimate the length-normalized log probability of Y , following the standard practice in neural text generation
(Cho et al., 2014):
$$p_{\theta}(Y|D)={\frac{1}{|Y|}}{\sum_{t=1}^{|Y|}}\log p_{\theta}(Y_{t}|D,Y_{<t})\quad(1)$$
As θ is optimized at the token level, we evaluate both *complete* and *partial* summaries with pθ(Y |D).
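To make the scoring procedure concrete, the following is a minimal sketch (not the authors' released implementation) of Equation (1) using the HuggingFace transformers API; the checkpoint name is an illustrative assumption, and the `score_summary` helper is reused in the later sketches:

```python
# Minimal sketch of Equation (1): length-normalised log-likelihood of a
# (complete or partial) summary Y given document D under a seq2seq model.
# The checkpoint below is an assumption for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn").eval()

@torch.no_grad()
def score_summary(document, summary):
    """Return (1/|Y|) * sum_t log p(Y_t | D, Y_<t)."""
    inputs = tokenizer(document, return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids
    # With `labels`, the model returns the mean token-level cross-entropy,
    # i.e. the negative of the length-normalised log-likelihood in Eq. (1).
    return -model(**inputs, labels=labels).loss.item()
```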
The candidate summary space for a document $D = \{s_i\}_{i=1}^{n}$ of $n$ sentences is combinatorial, consisting of $|C(D)| = C_{n}^{m}$ candidate summaries of length $m$. To sidestep the computational intractability, we introduce three inference algorithms that reduce the search complexity via approximations.
The first two (ranking and reranking) construct a candidate summary set, using either a discriminative or generative model (see Figure 1(a)), while the last approach searches directly over the partial summary candidate space (see Figure 1(b)).
Generative Ranking We employ a pre-trained generative model at both sentence- and summary-level for hierarchical ranking. Specifically, we input each document sentence s into a generator and evaluate its summary-worthiness independently via its likelihood. We then rank all document sentences, and any subset of size m of the top-k sentences is considered as a candidate summary c. The sequence-to-sequence generator then evaluates and ranks all candidate summaries, and the highest-ranked one is selected as the extractive hypothesis:
$$y=\operatorname*{argmax}_{c\subseteq{\mathrm{top-k}}\,p_{\theta}(s|D)}p_{\theta}\left(\oplus(c)|D\right)\qquad(2)$$
where ⊕ concatenates the selected document sentences in c, ordered by their rank.
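As an illustration, a sketch of this hierarchical ranking is given below; it assumes the `score_summary` helper from the Equation (1) sketch above, and treats k and m as hyperparameters (k = 5 and m = 3 are the values reported in Appendix A):

```python
# Sketch of Generative Ranking (Eq. 2): rank sentences by p(s|D), then rank
# every size-m subset of the top-k sentences by p(concat(c)|D).
from itertools import combinations

def generative_ranking(document, sentences, k=5, m=3):
    # Sentence-level ranking: evaluate each sentence independently.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score_summary(document, sentences[i]),
                    reverse=True)
    top_k = ranked[:k]
    # Summary-level ranking over all C(k, m) candidates, concatenated in rank order.
    best = max(combinations(top_k, m),
               key=lambda c: score_summary(document, " ".join(sentences[i] for i in c)))
    return [sentences[i] for i in best]
```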
Generative Reranking Instead of using the same generative model for both sentence and summary evaluation, we assume access to an existing discriminative model pϕ(s|D) for sentence evaluation and ranking. Following Zhong et al. (2020), we adopt BERTSUMEXT (Liu and Lapata, 2019) to score each document sentence and then build candidate summaries as the combinations of top-scoring
| Model | R-1 | R-2 | R-L |
|-------------------|-------|-------|-------|
| Lead-3 | 40.42 | 17.62 | 36.67 |
| Oracle | 52.59 | 31.23 | 48.87 |
| One-Stage Systems | | | |
| BERTSumExt | 42.73 | 20.13 | 39.20 |
| RoBERTaSumExt | 42.99 | 20.60 | 39.21 |
| Stepwise ETCSum | 43.84 | 20.80 | 39.77 |
| GenX (Search) | 43.57 | 20.55 | 40.01 |
| Two-Stage Systems | | | |
| BertSumExt+TRB | 43.18 | 20.16 | 39.56 |
| RoBERTaSumExt+TRB | 43.30 | 20.58 | 39.48 |
| MatchSum | 44.41 | 20.86 | 40.55 |
| Posthoc Rank | 39.77 | 18.51 | 36.00 |
| GenX (Rank) | 42.90 | 19.99 | 39.09 |
| GenX (Rerank) | 43.76 | 20.82 | 40.02 |

Table 1: Results on the CNN/DM test set.
sentences. In this case, the role of generative modeling is a summary-level reranker pθ(⊕(c)|D).
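A corresponding sketch of the reranking variant is shown below; the discriminative sentence scores (e.g., from BERTSumExt) are assumed to be precomputed, and `score_summary` is again the Equation (1) scorer sketched earlier:

```python
# Sketch of Generative Reranking: an external discriminative model supplies
# sentence scores; the generative model only reranks candidate summaries.
from itertools import combinations

def generative_reranking(document, sentences, discriminative_scores, k=5, m=3):
    top_k = sorted(range(len(sentences)),
                   key=lambda i: discriminative_scores[i], reverse=True)[:k]
    candidates = [" ".join(sentences[i] for i in c) for c in combinations(top_k, m)]
    # The generative model acts purely as a summary-level reranker p(concat(c)|D).
    return max(candidates, key=lambda c: score_summary(document, c))
```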
Generative Search Instead of ranking, we consider constructing a summary by searching directly over the *sentence* space, i.e., without first composing candidate summaries from the input document.
We propose a novel search algorithm that autoregressively selects a sentence until a stopping criterion is satisfied. Specifically, at each search step t, we evaluate and select a sentence as:
$$y_{t}=\operatorname*{argmax}_{s\in D}p_{\theta}(y_{<t}\oplus s|D)\qquad(3)$$
where ⊕ concatenates the selected sentences y<t and a candidate sentence s. The selected sentence y_t is then concatenated with y<t to form the selection history for the next step, as shown in Figure 1(c). We follow common practice in non-autoregressive extractive summarization (Liu and Lapata, 2019; Zhong et al., 2020) and assume a fixed number of sentences in the summary hypothesis, leading to a fixed number of search steps. Narayan et al. (2020) introduced a stepwise model which employs a special stop-token; the search stops when this token is generated.
To explore this we additionally experiment with a dynamic stopping criterion where search over sentences continues until the end of the sequence token, EOS , provides a higher summary likelihood
| Model | R-1 | R-2 | R-L |
|---------------------------|-------|-------|-------|
| BertSumExt (ZS) | 20.54 | 2.93 | 15.55 |
| BertSumExt+TRB (ZS) | 20.62 | 2.95 | 15.62 |
| MatchSum (ZS) | 20.90 | 3.07 | 15.75 |
| GenX (Search; Supervised) | 17.90 | 2.79 | 13.36 |
| GenX (Search; ZS) | 20.94 | 2.96 | 15.92 |
Table 2: Results on XSum test set. We highlight **highest**
scores. ZS denotes zero-shot performance for models trained on CNN/DM while Supervised uses XSum for training.
than adding an additional sentence:
$$\mathrm{s.t.}\quad\operatorname*{max}_{s\in D}p_{\theta}(y_{<t}\oplus s|D)>p_{\theta}(y_{<t}\oplus\mathrm{EOS}|D)\qquad(4)$$
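The search procedure of Equations (3) and (4) can be sketched as follows; `score_summary` is the Equation (1) scorer from earlier, and the textual rendering of the EOS token is an implementation assumption:

```python
# Sketch of Generative Search: greedily append the sentence that maximises the
# partial-summary likelihood (Eq. 3), stopping after a fixed number of steps or,
# with dynamic stopping, once EOS outscores every remaining sentence (Eq. 4).
def generative_search(document, sentences, max_steps=3, dynamic_stop=False, eos_text="</s>"):
    selected, remaining = [], list(range(len(sentences)))
    for _ in range(max_steps):
        prefix = " ".join(sentences[i] for i in selected)
        # Score every remaining sentence appended to the current partial summary.
        scored = [(score_summary(document, (prefix + " " + sentences[i]).strip()), i)
                  for i in remaining]
        best_score, best_i = max(scored)
        if dynamic_stop:
            eos_score = score_summary(document, (prefix + " " + eos_text).strip())
            if eos_score >= best_score:
                break  # EOS is more likely than any additional sentence
        selected.append(best_i)
        remaining.remove(best_i)
    return [sentences[i] for i in selected]
```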
## 3 Experimental Setup
We perform supervised experiments on CNN/DM
(Hermann et al., 2015) and zero-shot experiments on XSum (Narayan et al., 2018a). We evaluate summaries with ROUGE (Lin and Hovy, 2003).
Details for our experimental settings and datasets can be found in Appendix A.
As there is no established baseline for extractive summarization with generative modeling, we construct **Posthoc Rank**, a posthoc method for direct comparison with GenX. The baseline first generates an abstract using the abstractive model. Then, the generated abstract is used to query document sentences and m sentences are retrieved with BM25 as the summary while applying tri-gram blocking.
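A sketch of this baseline is shown below; it assumes the third-party rank_bm25 package for BM25 scoring and the HuggingFace summarization pipeline for generating the abstract, with a simple whole-word tri-gram blocking check:

```python
# Sketch of the Posthoc Rank baseline: generate an abstract, use it as a BM25
# query over document sentences, and keep the m top-scoring sentences while
# applying tri-gram blocking.
from rank_bm25 import BM25Okapi
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def trigrams(text):
    toks = text.lower().split()
    return {tuple(toks[i:i + 3]) for i in range(len(toks) - 2)}

def posthoc_rank(document, sentences, m=3):
    abstract = summarizer(document, truncation=True)[0]["summary_text"]
    bm25 = BM25Okapi([s.lower().split() for s in sentences])
    scores = bm25.get_scores(abstract.lower().split())
    extract = []
    for i in sorted(range(len(sentences)), key=lambda i: -scores[i]):
        if not any(trigrams(sentences[i]) & trigrams(s) for s in extract):  # blocking
            extract.append(sentences[i])
        if len(extract) == m:
            break
    return extract
```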
## 4 Results
Supervised Summarization Table 1 shows the results of various systems trained and evaluated on CNN/DM. The first block presents the performance of the Lead-3 baseline which considers the first 3 sentences in a document as the summary and an Oracle baseline which serves as an upper bound.
The second block reports the performance of one-stage summarization systems. Stepwise ETCSum (Narayan et al., 2020) is a state-of-the-art autoregressive system that learns to score partial summaries by iteratively selecting which sentence is a summary sentence. Different from GenX, it is a highly-customized extractive architecture optimized with extractive oracle summaries. As can be seen, GenX performs on par with Stepwise ETCSum, and outperforms BERTSumExt (Liu and Lapata, 2019) and RoBERTaSumExt (Narayan et al., 2020), two popular extractive systems based on sequence labeling.
| Model | R-1 | R-2 | R-L |
|------------------|-------|-------|-------|
| GenX (Search) | 43.57 | 20.54 | 40.01 |
| BART | ↓5.11 | ↓4.12 | ↓5.08 |
| Dynamic Stopping | ↓0.11 | ↓0.08 | ↓0.10 |
| Trigram Blocking | ↓0.16 | ↓0.26 | ↓0.18 |
Table 3: Ablation study on the CNN/DM test set.
The third block presents the results of two-stage systems. TRB denotes an additional stage for sentence selection with Trigram Blocking, an effective method for reducing redundancy. MatchSum
(Zhong et al., 2020) is a state-of-the-art extractive system that takes top-ranked sentences from BERTSumExt and then re-ranks the summary candidates composed by them with a model based on a Siamese-BERT architecture. As can be seen, GenX models improve over the one-stage BERTSumExt and RoBERTaSumExt, i.e., with or without BERTSumExt as a sentence-level ranker. Its reranking variant also outperforms BERTSumExt+TRB and RoBERTaSumExt+TRB, showing that generative summary-level evaluation is more effective than heuristically-derived selection criteria. Note that the performance of GenX still falls short of state-of-the-art MatchSum. This is all achieved while the design allows the base generative model to retain its ability to produce abstractive summaries. This is not applicable to any existing extractive systems except Posthoc Rank, which shows significantly inferior performance.
Zero-Shot Summarization We also examine the generalization capability of extractive systems in a *zero-shot* setting.2 As shown in Table 2, GenX
generalizes to a different dataset robustly, outperforming strong one- and two-stage systems. It is generally perceived that a model's zero-shot performance is inferior to the supervised performance.
Surprisingly, GenX performs substantially better in the zero-shot setting than its supervised counterpart. One potential reason for this is that despite the discrepancy between training and inference, CNN/DM is a more extractive dataset than XSum
(Liu and Lapata, 2019), and therefore contains more extract-specific knowledge. Compared to existing systems, GenX is more capable of transferring the extractive ability learned from CNN/DM
to XSum. This shows that treating extractive summarization as an *inference* task can significantly reduce the risk of overfitting to one specific dataset, shedding light on a new direction for knowledge transfer in zero-shot summarization.
## 5 Ablation Study
We further assess GenX with an ablation study.
Replacing BRIO (trained with MLE and contrastive loss) with BART (trained with MLE) leads to the largest performance drop. With the augmentation of contrastive learning, the abstractive system is competent in the dual role of both a generation and an evaluation model, emphasizing the importance of calibrating a generative model on its summary-level probability, even for its extractive inference.
The dynamic stopping mechanism introduced in Equation (4) performs on par with fixed-step search, showing that learning directly from abstracts is a promising way to teach models *when to stop* for summary extraction. GenX is also shown to be able to search for extractive summaries with less redundancy, as its performance cannot be further improved by incorporating Trigram Blocking.
## 6 Efficiency
We have shown that abstractive systems are capable extractive summarizers; however, it is important to highlight that the proposed method exhibits different computational requirements than contemporary extractive designs. Unlike extractive designs that compute a single score for a candidate sentence or summary (via a classification token), abstractive systems produce scores for all individual tokens in a candidate summary3. Computing these extra tokens makes approaches such as *ranking* and *reranking* with GenX more computationally demanding. However, when combined with search, GenX stands as an efficient solution to searching through an otherwise intractable candidate summary space. This is enabled by an abstractive system's ability to sequentially score text
(see Equation 1) and boils down to the complexity of beam search. This is a clear improvement in computational efficiency over systems like MatchSum, which only supports scoring complete summaries and must exhaustively recompute different permutations in the candidate summary space. To make this strategy computationally tractable, these models resort to heavy pruning, which limits the expressiveness that high-order modeling otherwise enables.

3For the sake of generality, we ignore computational costs related to encoding as this varies across models, but emphasize that it can have sizable practical implications.
## 7 Related Work
There is a plethora of work on controlling different aspects of summarization, from content (Xu and Lapata, 2022a; Ahuja et al., 2022) to formats
(Zhong et al., 2022). In this work, we offer efficient and effective control over the summary type
(extract versus abstract) during inference. Recent work also investigates how to treat discriminative tasks such as information extraction and retrieval with generative modeling and its effectiveness for entities (De Cao et al., 2020) and string identifiers
(Bevilacqua et al., 2022). Others have suggested delegating extractive inference to the encoder of a generative model (An et al., 2022). Despite the resemblances, extractive summarization with generative modeling remains under-explored and stands as a promising research direction with the surge of innovations in large language models.
## 8 Conclusion
In this paper, we explored the possibility of modeling extractive summarization with an abstractive system. We proposed three novel inference algorithms which allow an abstractive model to perform the extractive task. Our results showed that not only is extractive summarization feasible, but recent systems are directly competitive with contemporary extractive systems. This work shows that extractive and abstractive paradigms can be unified through a sequence-to-sequence design, removing the need for oracle summary labels and custom extractive model architectures.
## 9 Limitations
One potential way to improve the extractive performance of a generative system is to explicitly model the likelihood of *extracts* during training.
Driven by this intuition, we investigate creating a mixture of extractive and abstractive candidates for contrastive learning in BRIO. Specifically, we obtain extractive candidates with beam labeling proposed in Xu and Lapata (2022b), while the abstractive ones are from the original BRIO training data. Nevertheless, as we can see, this mixing method hurts both BRIO's extractive and abstractive performance. However, it is noteworthy that extractive summary is important in a wider context, as shown in Section 4: reference summaries in CNN/DM are highly extractive and optimizing a model on these summaries therefore may have provided it with the task instruction needed for extractive summarization, albeit implicitly. We leave the study of a more effective extract-aware learning strategy for future study.
Furthermore, we emphasize that the conclusions drawn in this paper are based on results produced on English datasets from the news domain. Even though these datasets are established benchmark datasets for summarization, it is imaginable that other domains and languages may have produced different evidence. Despite this, the results remain insightful as they show that extractive summarization is in fact feasible with modern abstractive systems. In future research, we look forward to shedding light on the possibilities and limitations of the proposed methods in a broader context.
## References
Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, and Greg Durrett. 2022. ASPECTNEWS:
Aspect-oriented summarization of news documents.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6494–6506, Dublin, Ireland.
Association for Computational Linguistics.
Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, and Xipeng Qiu. 2022. CoLo: A
contrastive learning based re-ranking framework for one-stage summarization. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5783–5793, Gyeongju, Republic of Korea.
International Committee on Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *Proceedings of the* 3rd International Conference on Learning Representations, San Diego, CA, USA.
Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Scott Yih, Sebastian Riedel, and Fabio Petroni. 2022.
Autoregressive search engines: Generating substrings as document identifiers. In *Advances in Neural Information Processing Systems*.
Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 484–494, Berlin, Germany.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties
of neural machine translation: Encoder–decoder approaches. In *Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical* Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics.
Anthony Christopher Davison and David Victor Hinkley.
1997. *Bootstrap methods and their application*. 1.
Cambridge university press.
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval.
In *International Conference on Learning Representations*.
Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830–4842, Online. Association for Computational Linguistics.
Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Proceedings of the 28th International Conference on Neural Information Processing Systems*, pages 1693–1701, Cambridge, MA, USA.
Chris Kedzie, Kathleen McKeown, and Hal Daumé III.
2018. Content selection in deep learning models of summarization. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 1818–1828, Brussels, Belgium.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 71–78, Edmonton, Canada.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International
Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017.
Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In *Proceedings of the 31st AAAI Conference on Artificial Intelligence*, pages 3075–3081, San Francisco, California, USA.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018a. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana.
Shashi Narayan, Joshua Maynez, Jakub Adamek, Daniele Pighin, Blaz Bratanic, and Ryan McDonald. 2020. Stepwise extractive summarization and planning with structured transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4143–4159, Online. Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc.
Yumo Xu and Mirella Lapata. 2022a. Document summarization with latent queries. *Transactions of the* Association for Computational Linguistics, 10:623–
638.
Yumo Xu and Mirella Lapata. 2022b. Text summarization with oracle expectation. arXiv preprint arXiv:2209.12714.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328–11339.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208, Online.
Association for Computational Linguistics.
Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, and Jiawei Han. 2022. Unsupervised summarization with customized granularities. arXiv preprint arXiv:2201.12502.
A Implementation Details
We show detailed data statistics in Table 4. For our GenX experiments, we use the BRIO system (Liu et al., 2022) as our underlying abstractive model. To replicate the BRIO system we run the published code repository associated with the paper. Specifically, we initialize a BART
model with the Huggingface Models Hub checkpoint facebook/bart-large-cnn and fine-tune it with the provided configuration using the training scheme presented in the paper on both the CNN/DailyMail and XSum datasets, using the data distributed in said repository. We train the model with full precision on a single machine with four Tesla V100 GPUs for 30 hours and choose the checkpoint with the lowest cross-entropy (generative) loss term on a held-out validation set. Interestingly, choosing the checkpoint with the lowest contrastive term produces poor results. Also, using mixed precision training does not appear to work.
To run the inference algorithms we initialize a BART system with different weights, either obtained through the above training procedure (BRIO)
or the baseline facebook/bart-large-cnn checkpoint. The hyperparameter m is identical to the desired length (in sentences) of the generated summary. m was tuned on the validation set and set to 3 for the CNN/DM dataset and 2 for XSum. k was set to 5, following the MatchSum system. We studied the effects of various length penalties in Equation 1, did not find our approach sensitive to this choice, and therefore omitted it from the equation. For this computation we run the model under fp16 mixed precision to save memory; however, casting the model entirely to half-precision for inference does not appear to work.
| Datasets               | CNN/DM   | XSum     |
|------------------------|----------|----------|
| Language               | En       | En       |
| Domain                 | Newswire | Newswire |
| #Train                 | 287,084  | 203,02   |
| #Validation            | 13,367   | 11,273   |
| #Test                  | 11,489   | 11,332   |
| #Sentences in Extract  | 3        | 2        |
Table 4: Data statistics for extractive summarization.
We used standard parameter settings for all experiments: ROUGE-1.5.5.pl -c 95 -m -r 1000 -n 2 -a.
## B License Information
The datasets used in this work, CNN/DM (Hermann et al., 2015) and XSum (Narayan et al.,
2018a), are both released under the MIT License.
## C System Output
Document: We spend a third of our lives asleep, but most of us don't pay attention to what our mind and body actually need during these resting hours in order to feel refreshed every day. The Sleep Health Foundation have released a study reporting that 30 percent of Australians complain about their lack of sleep on a daily basis. According to Chair Professor David Hillman, those misplaced hours of sleep must be paid back in order to be functional for the entire week. A study has outlined that 30 percent of Australians complain about their lack of sleep on a daily basis. The average adult needs around eight hours of sleep per night with a range of seven to nine. The average amount of sleep for an adult is around eight hours, with a range of seven to nine, the ABC have reported. Any less than six hours or any more than 10 hours is unusual for the standard person. Professor Hillman added that our sleep pattern is influenced by how much we are willing to compromise from the work week. 'A lot of us pay back a bit of that debt on the weekend but I think it's possible to exist in a sort of tolerable, sleep-restricted state,' he said. 'In other words you're not optimal, but you're still functional.' Pushing these sleep-debt boundaries can lead to micro sleeps in certain people. Therefore, the hours must be paid back to avoid an error rate in alertness tasks. Any less than six or any more than ten hours is unusual for the standard person. If power napping, it is important to get no more than 20 minutes or inertia will set in. In relation to a sleep schedule, Professor Hillman said the eight hours per night does not necessarily need to be consecutive. 'Interestingly enough, your slow wave sleep, is in the first four hours,' he said. 'Most adults, the most convenient way our particular society is organised is to have your eight hours in a continuous block overnight but that's not a necessary thing.' If choosing to break up your eight hours of sleep, napping throughout the day is the answer. Professor Hillman advises 20 minute power naps to avoid falling into deep sleep and suffering from inertia which makes you feel temporarily worse off. 'The longer naps, you get the sleep inertia but ultimately once you've got up, they sustain you better,' he said. Professor Hillman has also advised that if you are waking up tired and fatigued it could be due to sleep apnoea which is often associated with snoring.
Reference Summary: The Sleep Foundation study has shown that adults need 8 hours of sleep.
According to the study, 30 percent of Australians say they lack sleep daily. Professor David Hillman said it's important to pay back our sleep debts. He also says sleep can be broken up as long as you get the first 4 hours. Power naps should not be longer than 20 minutes or inertia will set in.
BertSumExt: The Sleep Health Foundation have released a study reporting that 30 percent of Australians complain about their lack of sleep on a daily basis. The average adult needs around eight hours of sleep per night with a range of seven to nine. Any less than six hours or any more than 10 hours is unusual for the standard person.
MatchSum: The Sleep Health Foundation have released a study reporting that 30 percent of Australians complain about their lack of sleep on a daily basis. The average adult needs around eight hours of sleep per night with a range of seven to nine.
GenX (Search): A study has outlined that 30 percent of Australians complain about their lack of sleep on a daily basis. The average adult needs around eight hours of sleep per night with a range of seven to nine. According to Chair Professor David Hillman, those misplaced hours of sleep must be paid back in order to be functional for the entire week.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9.
✗ A2. Did you discuss any potential risks of your work?
Section 9.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
Section 3-4 and Appendix A-B.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix B.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We used existing benchmarks as they are for fair comparisons.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.
## C ✓ **Did You Run Computational Experiments?** Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
thakur-etal-2023-language | Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions | https://aclanthology.org/2023.acl-short.30 | Societal biases present in pre-trained large language models are a critical issue as these models have been shown to propagate biases in countless downstream applications, rendering them unfair towards specific groups of people. Since large-scale retraining of these models from scratch is both time and compute-expensive, a variety of approaches have been previously proposed that de-bias a pre-trained model. While the majority of current state-of-the-art debiasing methods focus on changes to the training regime, in this paper, we propose data intervention strategies as a powerful yet simple technique to reduce gender bias in pre-trained models. Specifically, we empirically show that by fine-tuning a pre-trained model on only 10 debiased (intervened) training examples, the tendency to favor any gender is significantly reduced. Since our proposed method only needs a few training examples, we argue that our few-shot de-biasing approach is highly feasible and practical. Through extensive experimentation, we show that our de-biasing technique performs better than competitive state-of-the-art baselines with minimal loss in language modeling ability. | # Language Models Get A Gender Makeover: Mitigating Gender Bias With Few-Shot Data Interventions
Himanshu Thakur Atishay Jain∗ **Praneetha Vaddamanu**∗
Paul Pu Liang Louis-Philippe Morency Carnegie Mellon University
{hthakur,atishayj,pvaddama,pliang,morency}@andrew.cmu.edu
## Abstract
Caution: this paper contains potentially offensive or upsetting model outputs.
Societal biases present in pre-trained large language models are a critical issue as these models have been shown to propagate biases in countless downstream applications, rendering them unfair towards specific groups of people. Since large-scale retraining of these models from scratch is both time and compute-expensive, a variety of approaches have been previously proposed that de-bias a pre-trained model. While the majority of current state-of-the-art debiasing methods focus on changes to the training regime, in this paper, we propose data intervention strategies as a powerful yet simple technique to reduce gender bias in pretrained models. Specifically, we empirically show that by fine-tuning a pre-trained model on only 10 de-biased (intervened) training examples, the tendency to favor any gender is significantly reduced. Since our proposed method only needs a few training examples, our few-shot debiasing approach is highly feasible and practical. Through extensive experimentation, we show that our debiasing technique performs better than competitive state-of-the-art baselines with minimal loss in language modeling ability.
## 1 Introduction
Recently, there has been a surge of interest in pre-trained large language models (LLMs) in natural language processing (NLP). It has been shown that the pre-training + finetuning of a model drastically improves its performance on downstream tasks as the knowledge captured by pre-training on a large corpus is transferred to the downstream application when finetuning the model. However, this also leads to societal biases like gender bias that were implicitly learned by the pre-trained models being transferred to crucial downstream applications like job recommendation engines (Zhao et al., 2019; Barocas et al., 2017; Kurita et al., 2019). Analyzing and mitigating bias without requiring significant re-training or compute resources is crucial to the widespread adoption of LLMs in downstream applications.

∗ Equal Contribution
Previous work (Nadeem et al., 2021; Nangia et al., 2020a; Cer et al., 2018) has attempted to quantify bias, and others such as Ravfogel et al.
(2020) and Liang et al. (2021) have attempted to remove it algorithmically from the models. Closer to our work are data-manipulative techniques such as Zmigrod et al. (2019) and Maudslay et al. (2019)
that modify the dataset and further fine-tune the model. In this paper, we propose simple data intervention strategies and show that they can mitigate gender bias in pre-trained models with the help of few-shot fine-tuning. Moreover, taking inspiration from Schick et al. (2021), we find that by utilizing a biased pre-trained LLM for mining for most gender-biased samples in a dataset, our methods can mitigate gender bias with very few training samples.
Finally, we perform an extensive evaluation of our debiasing technique on two recent bias benchmarks
(Nadeem et al., 2021) and show that our method outperforms three existing state-of-the-art techniques and performs comparably to the other two. Our main contributions are the following:
- We propose simple data intervention techniques that can be used to reduce gender bias in a pre-trained LLM with few training examples (few-shot), thus making human-in-theloop bias mitigation strategies feasible.
- We introduce a novel data sampling technique that utilises LLMs to mine for the most biased samples from a dataset and can benefit existing state-of-the-art debiasing methods. When used for debiasing a model, these few samples serve as exemplars and induce large reductions in gender bias.
![1_image_0.png](1_image_0.png)
## 2 Related Work
In recent years, there has been growing concern about the bias/stereotypical discriminatory behavior by NLP models, particularly concerning gender. Several studies have investigated the presence of gender bias in various NLP tasks and proposed methods for mitigating it.
One line of research has focused on analyzing the extent of gender bias in pre-trained language models such as BERT and GPT-2. These studies have found that these models exhibit a significant amount of gender bias in their word embeddings, for BERT (Jentzsch and Turan, 2022) and for GPT-2 (Kirk et al., 2021), and are prone to making stereotypical gender-based predictions (e.g., assuming that a doctor is male and a nurse is female). Standard evaluation tools in this line of research are stereotype metrics such as StereoSet (Nadeem et al., 2021), which evaluates the model's ability to predict gender stereotypes, and CrowS-Pairs (Nangia et al., 2020b), which measures whether a model generally prefers more stereotypical sentences. A
similar line of work is gender bias tests proposed in BIG-bench (Srivastava et al., 2022). The tests assess the language model's gender biases, stereotypes, and ability to infer gender information. It evaluates gender bias and stereotype between male and female, and gender minority bias and stereotype between majority and minority. It also examines the model's language modeling performance, which can be affected during de-biasing.
Another line of research has proposed methods for debiasing these models. These methods can be broadly categorized into two groups: **data-based**
and **algorithm-based**. Data-based methods aim to reduce bias by removing or altering biased words from the training set. In contrast, algorithm-based methods aim to modify the model's architecture or training procedure to reduce bias. One popular data-based method is "uncertainty sampling" (Lewis and Gale, 1994), where the model is trained on the instances that it is most uncertain about, which can help to reduce bias by forcing the model to learn from a diverse set of examples. A popular algorithm-based method is "Adversarial Debiasing" proposed by Zhang et al. (2018), which fine-tunes the model using an adversarial loss to make it less sensitive to sensitive attributes such as gender. OSCaR, proposed by Dev et al. (2021), is another algorithm-based method that utilizes the idea of disentangling "problematic concepts" like the relationship between occupation and gender instead of removing them altogether.
MABEL (He et al., 2022) has both algorithm and data-based components, as it first augments the training data by swapping gender words and then applies a contrastive learning objective and alignment via entailment pairs. Their data augmentation strategy is similar in spirit to the data intervention techniques we propose, however our analysis does not require training auxiliary models and uses significantly lesser data.
Data-based methods include the "Equalization" technique proposed by Bolukbasi et al. (2016),
which aims to equalize the representation of gender-specific words in the embedding space, the "Counterfactual Data Augmentation" (CDA) method proposed by Zimmermann and Hoffmann (2022),
which generates counterfactual examples to improve the model's robustness to bias, and "NameBased Counterfactual Data Substitution" proposed by Maudslay et al. (2019) which reduces gender bias by replacing gender-informative names in the dataset with gender-neutral names. Our proposed method is also a data-based method, which aims to effectively reduce gender bias by taking inspiration from different techniques such as uncertainty sampling and name-based counterfactual data substitution (Maudslay et al., 2019).
## 3 **Probing Bias In Large Language Models**
Pre-trained LLMs are biased towards different genders, as seen in a simple mask-fill experiment using BERT. (Here, and in the rest of the paper, we assume a binary treatment of gender for simplicity.) The task is then to mask out the gender-related nouns and pronouns (such as he, she, her, woman, etc.) and get BERT to predict the masked words for the affected sequences in the dataset. Here, we consider a fixed list of gender-specific words curated from previous work (Lu et al., 2018; Zmigrod et al., 2019) and a neutral word list1. We finally compute the "total confidence difference" as the sum of differences in the model's prediction confidence for each gender-word pair (such as the confidence of predicting he − she, man − woman, etc.).
Formally, we define the total confidence difference as $|\sum_{i=0}^{N}(f(x^{(i)}_{\mathrm{female}}) - f(x^{(i)}_{\mathrm{male}}))|$, where $f(x)$ represents the confidence of the model's prediction, $N$ is the total number of tokens in the dataset, and $x$ is the tokenized gender word. The higher this number, the more biased the model is concluded to be. We compute the metric at the token level and ensure that each gender word gets tokenized into exactly one token by initially extending the tokenizer with our gender word list. The top 3 biased gender-word pairs in StereoSet are shown in Table 1. Intuitively, our technique for gauging bias in LLMs is sensitive to the fixed word list used to represent the sensitive attributes (here, gender). In Table 2, we show the number of words covered by the word list used for both the WikiText-2 and StereoSet datasets.
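As an illustration of how this probe can be computed, a sketch (not the authors' released code) is given below; the word-pair list is a tiny stand-in for the curated list, and the exact way gender positions are masked is an implementation assumption:

```python
# Sketch of the "total confidence difference" probe: mask a gender word, read
# BERT's prediction confidence for both words in the pair, and accumulate the
# signed difference over the dataset before taking the absolute value.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
PAIRS = [("he", "she"), ("man", "woman"), ("boy", "girl")]  # illustrative subset

@torch.no_grad()
def total_confidence_difference(sentences):
    total = 0.0
    for sent in sentences:
        for male, female in PAIRS:
            masked = sent.replace(f" {male} ", f" {tokenizer.mask_token} ")
            if tokenizer.mask_token not in masked:
                continue
            enc = tokenizer(masked, return_tensors="pt", truncation=True)
            pos = (enc.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
            probs = model(**enc).logits[0, pos].softmax(-1)
            total += (probs[tokenizer.convert_tokens_to_ids(female)]
                      - probs[tokenizer.convert_tokens_to_ids(male)]).item()
    return abs(total)
```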
## 4 Data Interventions
In order to reduce gender bias in pre-trained models, we carefully select diverse and hard-biased examples and then replace gender words with more neutral or equality-focused phrases. This is achieved by using a wordlist to find gender terms in sentences and then segregating words as name and non-name words.

1https://github.com/joelparkerhenderson/inclusive-language
We call our initial approach naive-masking as it does not require a word list for mapping gender words to gender-neutral words. Instead, it replaces all gender words with the fixed word "person." In our next approach, neutral-masking, we swap words in a slightly more semantically accurate manner. In this, we use a word-pair list that goes from gender words to gender-neutral words. With both approaches, we intend to introduce new words in a model's vocabulary to make it more likely to choose a more neutral word in gender-biased sentences.
In our final approach, we exploit the existing vocabulary of the model and try to balance the confidence of prediction on opposite-gender words by using phrases instead. Thus, we call our final approach random-phrase-masking as we instead substitute words with phrases that reflect the equality of gender. This approach not only reduces gender bias but also preserves the original meaning of the sentence in most cases. In our approach, we choose the phrases and the order of gender words at random with equal probability.
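The three interventions can be sketched as simple word-level substitutions, as below; the word lists and phrase templates here are small illustrative stand-ins for the full curated lists used in our experiments:

```python
# Sketch of the three data interventions on a single sentence.
import random
import re

PAIR_MAP = {"he": "she", "she": "he", "boy": "girl", "girl": "boy",
            "man": "woman", "woman": "man"}          # stand-in gender word pairs
NEUTRAL_MAP = {"he": "they", "she": "they", "boy": "kid", "girl": "kid",
               "man": "person", "woman": "person"}   # stand-in neutral mapping
PHRASES = ["{a} or {b}", "{a} and {b}", "either {a} or {b}"]
GENDER_RE = re.compile(r"\b(" + "|".join(PAIR_MAP) + r")\b", re.IGNORECASE)

def naive_masking(text):
    return GENDER_RE.sub("person", text)

def neutral_masking(text):
    return GENDER_RE.sub(lambda m: NEUTRAL_MAP[m.group(0).lower()], text)

def random_phrase_masking(text):
    def repl(m):
        w = m.group(0).lower()
        a, b = (w, PAIR_MAP[w]) if random.random() < 0.5 else (PAIR_MAP[w], w)
        return random.choice(PHRASES).format(a=a, b=b)
    return GENDER_RE.sub(repl, text)

# e.g. random_phrase_masking("The boy went home") -> "The either girl or boy went home"
```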
| Gender-Word Pairs | Mean Confidence Difference |           |
|-------------------|----------------------------|-----------|
|                   | Mean                       | Std. Dev. |
| he, she           | 0.317                      | 0.288     |
| Will, May         | 0.316                      | 0.225     |
| boy, girl         | 0.219                      | 0.218     |

Table 1: Top 3 biased gender-word pairs in StereoSet, ranked by mean confidence difference.
| Dataset    | Samples | Affected Words (mean) |
|------------|---------|-----------------------|
| WikiText-2 | 10      | 191                   |
|            | 50      | 627                   |
|            | 100     | 1028                  |
| StereoSet  | 10      | 55                    |
|            | 50      | 227                   |
|            | 100     | 463                   |

Table 2: Number of affected words (mean) covered by the gender word list for varying numbers of selected samples from WikiText-2 and StereoSet.
| Intervention          | Input word | Converted word     |
|-----------------------|------------|--------------------|
| naive-masking         | he         | person             |
|                       | she        | person             |
|                       | boy        | person             |
| neutral-masking       | he         | they               |
|                       | her        | their              |
|                       | schoolgirl | schoolkid          |
| random-phrase-masking | he         | he or she          |
|                       | she        | she and he         |
|                       | boy        | either girl or boy |
Table 3: Example conversions for the three methods. In random-phrase-masking, the phrase and its order are chosen at random.

Additionally, we hypothesize that the choice of the dataset for fine-tuning is also essential. We choose two datasets: the WikiText-2 (Merity et al.,
2017) dataset, which has implicit gender bias since it is sourced from Wikipedia articles, and the StereoSet dataset (Nadeem et al., 2021), which has explicit/more gender bias as it has been designed to evaluate gender bias. WikiText-22 has 600 train articles and roughly 2M tokens, while StereoSet3 (dev) has 2,123 samples, of which we only consider the 800 samples that are not unrelated. Naturally, our data intervention method should work better on a dataset whose training examples contain gender bias while being devoid of meaningful gender associations like "She needs a gynecologist," where the gender of the person is important. By testing our method on both datasets, we can understand the sensitivity of our approach to the quality of training samples used.
## 5 Bias Evaluation Metrics
We focus on evaluating the bias of a model while also measuring its language modeling capability.
The ideal model would not just be one with the least bias but also one which does not compromise its language modeling performance. The dual estimation of bias and performance of a model was proposed in the StereoSet benchmark (Nadeem et al., 2021),
with the Language Modeling Score (LMS) measuring the percentage of times a meaningful token is predicted for the mask as opposed to a meaningless token, the Stereotype Score (SS) measuring the percentage of times the model predicted a stereotypical word as compared to an anti-stereotypical word, and an idealized CAT score (ICAT) combining the LMS
and SS score into a single metric. An ideal model has an ICAT score of 100, while the worst biased model has an ICAT score of 0. We additionally evaluate the CrowS-Pairs benchmark (Nangia et al.,
2020a), which captures data with greater diversity in both the stereotypes expressed and the structure of sentences (50 is ideal). However, we note that the Crow-S benchmark is much more limited compared to StereoSet (Nadeem et al., 2021) in terms of both the volume and variety of linguistic phenomena relating to gender bias that it covers.
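Concretely, ICAT combines the two StereoSet scores as defined by Nadeem et al. (2021), with LMS and SS both expressed as percentages:

$$\mathrm{ICAT}=\mathrm{LMS}\cdot{\frac{\operatorname*{min}(\mathrm{SS},\,100-\mathrm{SS})}{50}}$$

so an unbiased model (SS = 50) with perfect language modeling (LMS = 100) attains the ideal ICAT of 100, while a fully biased model (SS = 0 or 100) attains 0, matching the bounds stated above.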
## 6 Experiments
We compare our proposed interventions with five baselines: four state-of-the-art debiasing methods and the original pre-trained model. Our first baseline is the application of dropout to neural networks, **Dropout**, proposed by Webster et al. (2020).
Next, we consider an algorithmic de-biasing technique, **INLP**, proposed by Ravfogel et al. (2020). Then, we consider a sentence embedding de-biasing approach, **SentenceDebias** (Liang et al., 2020). Finally, we consider a data-based approach, CDA (Zmigrod et al., 2019), which is closest to our work. For a fairer comparison, we run the baselines with the same training set size (100) as our method. For all of our experiments, we consider the "bert-base-uncased" pre-trained model available from HuggingFace. For fine-tuning our model, we select a varying number of most-biased training samples (10, 50, and 100) from the WikiText-2 and StereoSet (we only use the dev set) datasets, as discussed in Section 4. We also compare this to a random selection of data points as an ablation study.
On the selected dataset, we apply our interventions and obtain the modified dataset, which is then used to fine-tune our pre-trained model using the masked language modeling (MLM) loss. The key point is that we only fine-tune the model on the gender words conditioned on the remaining text, significantly reducing the fine-tuning time. We perform ablations on various types of interventions as discussed in Table 7. The model is trained for 30 epochs, with a learning rate of 0.001 and the AdamW optimizer. We ran all of our experiments on an NVIDIA Tesla T4 GPU on Google Colab for roughly 48 hours. For all experiments, we report the numbers as the mean and standard deviations (6) of 3 different runs. Our experiment code can be found here.4
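A sketch of this fine-tuning step is shown below; it assumes a precomputed set of gender-word token ids (obtained from the word list after extending the tokenizer), and whether the gender positions are additionally masked out in the input is an implementation assumption:

```python
# Sketch of few-shot debiasing fine-tuning: only gender-word positions carry
# MLM loss (all other label positions are set to -100), so the model is
# fine-tuned on gender words conditioned on the remaining text.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
optimizer = AdamW(model.parameters(), lr=1e-3)  # lr = 0.001 as reported above

def fine_tune(intervened_sentences, gender_token_ids, epochs=30):
    gender_ids = torch.tensor(sorted(gender_token_ids))
    model.train()
    for _ in range(epochs):
        for sent in intervened_sentences:
            enc = tokenizer(sent, return_tensors="pt", truncation=True)
            labels = enc.input_ids.clone()
            is_gender = torch.isin(labels, gender_ids)
            if not is_gender.any():
                continue
            labels[~is_gender] = -100                    # loss only on gender words
            inputs = enc.input_ids.clone()
            inputs[is_gender] = tokenizer.mask_token_id  # predict them from context
            loss = model(input_ids=inputs,
                         attention_mask=enc.attention_mask,
                         labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```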
## 7 Results
Table 4 shows the StereoSet and Crow-S scores for our baselines and our best-performing interventions on the WikiText-2 dataset. In the StereoSet benchmark, we observe that random-phrase-masking obtains lower SS than all other baselines. On the Crow-S benchmark, random-phrase-masking does better than three of the baselines, with only SentenceDebias achieving slightly better scores.

4https://github.com/himansh005/data_debias

While random-phrase-masking results in lower SS scores than neutral-masking, it also obtained
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
![4_image_2.png](4_image_2.png)
![4_image_3.png](4_image_3.png)

| Method | StereoSet SS (↓) | StereoSet LMS (↑) | StereoSet ICAT (↑) | Crow-S (↓) |
|--------|------------------|-------------------|--------------------|------------|
| None | 60.279 | 84.172 | 70.300 | 57.250 |
| CDA | 60.022 | 83.466 | 70.892 | 56.107 |
| Dropout | 60.529 | 83.811 | 70.171 | 55.977 |
| SentenceDebias | 59.221 | **84.166** | 71.308 | **53.817** |
| INLP | 58.205 | 83.391 | 70.966 | 55.727 |
| random-phrase-masking (10) | 59.442 | 80.312 | 70.406 | 54.580 |
| random-phrase-masking | **58.037** | 78.676 | 69.949 | 54.457 |
| neutral-masking (10) | 60.341 | 83.956 | 72.548 | 55.535 |
| neutral-masking | 60.814 | 83.587 | **72.213** | 56.490 |

Table 4: StereoSet scores (SS, LMS, ICAT) and Crow-S scores for the baselines and our interventions fine-tuned on WikiText-2.
very low LMS scores. We attribute this performance degradation to the blunt substitution of phrases that our method uses, which might lead to odd-sounding sentences. In the CrowS benchmarks, we see similar behavior and find that random-phrase-masking does better than neutral-masking. Since we believe that our method is sensitive to the choice of the dataset, we also present results on the StereoSet (dev)
dataset (Table 6). In Figure 2, we perform a qualitative analysis of our proposed approach and find that random-phrase-masking is able to flip the predictions on fill-mask tasks for stereotypical sentences.
## 8 Conclusion
In this paper, we show that simple data interventions on limited training data effectively reduce gender bias in LLMs. We also show that a biased pretrained LLM can be used to mine the most effective de-biasing training examples. Evaluation of our methods on state-of-the-art bias benchmarks empirically suggests that our methods effectively reduce gender bias. Given that our methods can work in a few-shot manner and do not require any auxiliary model training, we hope that our work benefits further research in the domain of human-in-the-loop bias mitigation techniques by making the creation of bias mitigation datasets feasible.
## 9 Limitations
Our proposed method has the following main limitations which we believe are important directions for future work to address:
1. **Gender dependency:** Our approach does not account for sentences that only make sense for a single gender. For example, sentences like
"She needs to see a gynecologist" would not be captured by our method. This is a common problem encountered by most debiasing algorithms as it is difficult to distinguish these.
2. **Finite wordlist:** The wordlist does not contain all gender-based words as the language continues to evolve. We believe that future works could employ better approaches that can automatically mine gender words relevant to a dataset.
3. **Blunt substitution:** The phrase substitution method is an improvement over direct word substitution, but there are still plenty of instances where the new sentence might be semantically incorrect. This does not have any major implication on inference as we are only doing few-shot learning, but it should not be extended to the entire dataset.
4. **Binary gender:** The method only focuses on the male and female gender. It does not consider non-binary or gender-neutral pronouns such as "ze/hir." This can be solved by using an updated wordlist, but the authors could not come across one at the time of writing.
5. **Downstream analyses:** While our work proposes methods that show reduced gender bias as per a set of metrics, the work in no way claims to reduce gender bias in general, especially on downstream tasks. However, we strongly believe that this technique holds potential to reduce gender bias on downstream tasks as well since we adopt a regular finetuning approach and focus mainly on better data interventions. Moreover, recent research has shown that fine-tuning-based debiasing approaches do not damage a model's internal representations to a critical extent (Meade et al.,
2022).
Overall, these limitations suggest that our approach may not be suitable for use in contexts where gender-specific or non-binary language is prevalent, and the underlying wordlist should be frequently updated.
## 10 Ethics Statement
This study was conducted in accordance with ethical principles and guidelines. The study was designed to provide beneficial knowledge and not harm any group or individual. We recognize that the wordlist we use might not represent all contexts of gender bias and that our debiasing method does not cover all contexts of occurrences of gender bias. However, we made sure to consider the ethical implications of our methodologies and the results of our analysis. The authors have tried to ensure the method does not amplify any other inherent bias but also acknowledge that our approach may have limitations.
We take responsibility for any ethical concerns that may arise as a result of our research.
## Acknowledgments
This material is based upon work partially supported by the National Science Foundation (Awards
\#1722822 and \#1750439) and National Institutes of Health (Awards \#R01MH125740, \#R01MH096951, and \#U01MH116925). PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Facebook, or CMLH, and no official endorsement should be inferred. Additionally, we express our appreciation to the anonymous reviewers for their insightful suggestions, which greatly improved our work. Furthermore, we would like to acknowledge the contributions of our colleagues, Atishay Jain and Praneetha Vaddamanu, who played a significant role in the development of this research.
## References
Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Allocative versus representational harms in machine learning.
In *9th Annual conference of the special interest group* for computing, information and society.
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016.
Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *Advances in* Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4349–4357.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics.
Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Srikumar.
2021. OSCaR: Orthogonal subspace correction and rectification of biases in word embeddings. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5034–
5050, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Jacqueline He, Mengzhou Xia, Christiane Fellbaum, and Danqi Chen. 2022. Mabel: Attenuating gender bias using textual entailment data. *ArXiv preprint*,
abs/2210.14975.
Sophie Jentzsch and Cigdem Turan. 2022. Gender bias in BERT - measuring and analysing biases through sentiment rating in a realistic downstream classification task. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*,
pages 184–199, Seattle, Washington. Association for Computational Linguistics.
Hannah Rose Kirk, Yennie Jun, Filippo Volpin, Haider Iqbal, Elias Benussi, Frédéric A. Dreyer, Aleksandar Shtedritski, and Yuki M. Asano. 2021. Bias out-ofthe-box: An empirical analysis of intersectional occupational biases in popular generative language models.
In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information* Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 2611–2624.
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In *Proceedings of the* First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Association for Computational Linguistics.
David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In *SIGIR '94*,
pages 3–12, London. Springer London.
Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515, Online. Association for Computational Linguistics.
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6565–6576. PMLR.
Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing.
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267–5275, Hong Kong, China. Association for Computational Linguistics.
Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy.
2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland.
Association for Computational Linguistics.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021.
StereoSet: Measuring stereotypical bias in pretrained language models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020a. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020b. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.
Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021.
Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP. *Transactions of the* Association for Computational Linguistics, 9:1408–
1424.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615.
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models.
346 Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell.
2018. Mitigating unwanted biases with adversarial learning. *ArXiv preprint*, abs/1801.07593.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics.
Victor Zimmermann and Maja Hoffmann. 2022. Absinth: A small world approach to word sense induction. In *Proceedings of the 18th Conference on Natural Language Processing (KONVENS 2022)*, pages 121–128, Potsdam, Germany. KONVENS 2022 Organizers.
Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651–1661, Florence, Italy. Association for Computational Linguistics.
## A Appendix

## A.1 Dataset Bias Analysis

To gauge the feasibility of a wordlist-based intervention approach, we first analyze our datasets for occurrences of gender words. As shown in the word cloud (Figure 4), gender pronouns are the most frequent words in our datasets. Moreover, as per Figure 1, "she," "he," and "her" are the top three most frequently occurring words in our dataset. This suggests that we can reliably detect gender words in our corpus and apply our interventions.
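As a rough illustration of this frequency check, the sketch below counts gender-word occurrences over a toy corpus; the wordlist and the corpus here are illustrative stand-ins, not the exact resources used in our analysis.

```python
from collections import Counter

# Illustrative gender wordlist; the actual wordlist used in our analysis may differ.
GENDER_WORDS = {"she", "he", "her", "him", "his", "hers", "woman", "man",
                "girl", "boy", "mother", "father"}

def count_gender_words(sentences):
    """Count occurrences of gender words across a list of sentences."""
    counts = Counter()
    for sentence in sentences:
        for token in sentence.lower().split():
            token = token.strip(".,!?\"'()")
            if token in GENDER_WORDS:
                counts[token] += 1
    return counts

# Toy corpus standing in for StereoSet / WikiText-2 samples.
corpus = ["She rushed to see what he wanted.", "The little girl missed her mommy."]
print(count_gender_words(corpus).most_common(3))
```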
![7_image_2.png](7_image_2.png)
![7_image_0.png](7_image_0.png)
## A.2 Sensitivity To Choice Of Dataset
To understand the effectiveness of our proposed data interventions, we apply our methods to two datasets under varying numbers of training samples (10, 50, and 100) and two selection strategies (most biased first and random), as reported in Table 6. Our methods obtain better results on the StereoSet (dev) dataset.
One likely reason is that StereoSet contains explicit gender bias, so a sentence like "She needs a gynaecologist" is unlikely to appear in it. Because our interventions perform blunt substitutions, such a sentence could become incorrect under our method, e.g., "Either he or she needs a gynaecologist".
## A.3 Sensitivity To Number Of Training Samples And Sampling Strategy
![7_image_1.png](7_image_1.png)
As per Figure 5, when we vary the number of training samples, we observe that the difference in performance is not large as we transition from 10 to 100 samples, suggesting that our method
| Sentences | Mean Confidence Difference |
|---|---|
| She rushed to see what he wanted and said she loved him. She punched him in the face and told him to go away. | 6.85 |
| Jessica is a new mommy. Jessica finds being a mother does not come easy to her. She will no longer work so she can stay home and take care of her child. | 6.34 |
| The little girl missed her mommy. She missed watching her cook in the kitchen while wearing a floral apron. She was never home because she worked long hours in the oil field. | 4.70 |

Table 5: Sentences from StereoSet with maximum difference in confidence of prediction between opposite gender words.
| Dataset | Sampling Method | Number of Samples | Crow-S Total | Crow-S Stereotype Score | Crow-S Anti-stereotype Score | StereoSet SS (gender) | StereoSet LMS (gender) | StereoSet ICAT (gender) | Perplexity |
|---|---|---|---|---|---|---|---|---|---|
| StereoSet | most-biased | 10 | 54.481 (2.583) | 50.408 (5.295) | 60.991 (3.854) | 58.736 (1.215) | 80.858 (2.988) | 66.708 (2.584) | 50.449 (54.983) |
| StereoSet | random | 10 | 55.47 (3.247) | 50.527 (3.632) | 63.107 (4.234) | 58.952 (0.859) | 80.226 (2.85) | 65.862 (2.655) | 86.024 (107.709) |
| StereoSet | most-biased | 50 | 52.994 (1.894) | 47.567 (4.564) | 61.428 (5.25) | 58.498 (1.19) | 80.255 (2.428) | 66.595 (2.207) | 29.599 (28.648) |
| StereoSet | random | 50 | 53.817 (1.011) | 50.107 (2.972) | 59.547 (3.925) | 58.485 (0.758) | 79.158 (1.992) | 65.707 (0.886) | 62.498 (11.593) |
| StereoSet | most-biased | 100 | 53.054 (2.402) | 49.063 (6.025) | 59.291 (4.663) | 58.071 (1.158) | 81.086 (3.226) | 67.972 (2.671) | 19.079 (14.095) |
| StereoSet | random | 100 | 53.563 (1.801) | 48.113 (6.499) | 62.137 (5.405) | 57.719 (1.94) | 79.038 (1.406) | 66.805 (2.074) | 34.826 (12.109) |
| WikiText-2 | most-biased | 10 | 55.6 (3.06) | 54.668 (5.606) | 57.118 (1.671) | 59.344 (0.742) | 84.624 (2.134) | 68.811 (2.176) | 87.06 (80.998) |
| WikiText-2 | random | 10 | 56.617 (1.344) | 57.983 (1.305) | 54.693 (2.019) | 60.616 (0.72) | 85.076 (0.896) | 67.021 (1.895) | 59.901 (102.019) |
| WikiText-2 | most-biased | 50 | 54.276 (1.513) | 53.394 (3.847) | 55.834 (2.652) | 59.238 (1.068) | 83.348 (3.003) | 67.902 (0.977) | 212.365 (155.526) |
| WikiText-2 | random | 50 | 54.2 (2.383) | 51.783 (5.272) | 57.93 (2.969) | 59.611 (1.155) | 83.456 (1.9) | 67.386 (0.74) | 116.872 (100.401) |
| WikiText-2 | most-biased | 100 | 55.473 (1.42) | 54.827 (4.255) | 56.637 (4.329) | 59.426 (1.719) | 83.442 (3.185) | 67.629 (1.178) | 220.957 (207.243) |
| WikiText-2 | random | 100 | 54.457 (1.444) | 51.363 (4.283) | 59.223 (4.451) | 59.545 (0.387) | 81.953 (1.442) | 66.3 (0.597) | 326.017 (181.822) |

Table 6: StereoSet and Crow-S scores for random-phrase-masking method on two datasets, 3 sample sizes and 2 selection methods. We report mean (standard deviation) across 3 different runs. Selecting most-biased samples and using the StereoSet dataset for fine-tuning gives best results.
| Name Word Mask Method | Non-Name Word Mask Method | Crow-S Total | Crow-S Stereotype Score | Crow-S Anti-Stereotype Score | StereoSet SS | StereoSet LMS | StereoSet ICAT | Perplexity |
|---|---|---|---|---|---|---|---|---|
| | female-first random-phrase-masking | 53.18 (3.106) | 49.06 (6.627) | 59.55 (3.678) | 58.283 (0.4) | 79.059 (0.436) | 65.96 (0.27) | 26.3 (9.545) |
| | naive-masking | 50.637 (0.585) | 43.607 (1.449) | 61.49 (1.486) | 59.521 (0.458) | 83.325 (0.62) | 67.456 (0.414) | 1.0 (0.0) |
| naive-masking | random-phrase-masking | 52.673 (1.374) | 49.057 (5.998) | 58.253 (5.906) | 58.05 (0.851) | 78.218 (0.633) | 65.618 (0.937) | 30.045 (8.019) |
| neutral-masking | female-first random-phrase-masking | 53.44 (0.0) | 53.46 (0.891) | 53.4 (1.372) | 58.246 (0.285) | 87.182 (0.391) | 72.806 (0.823) | 11.39 (6.649) |
| | random-phrase-masking | 54.195 (1.619) | 48.43 (0.891) | 63.11 (2.744) | 57.316 (0.164) | 78.339 (0.196) | 66.877 (0.424) | 54.413 (0.212) |
| | fixed-phrase-masking-1 | 53.307 (1.761) | 46.837 (8.494) | 63.43 (8.807) | 57.688 (1.718) | 79.554 (0.17) | 67.32 (2.64) | 14.484 (1.512) |
| | fixed-phrase-masking-2 | 51.783 (4.965) | 46.43 (10.381) | 60.193 (3.503) | 57.229 (1.739) | 80.551 (1.251) | 68.879 (1.882) | 13.374 (1.174) |
| | fixed-phrase-masking-3 | 52.927 (1.541) | 48.317 (3.78) | 60.193 (4.234) | 56.963 (1.373) | 79.3 (1.531) | 68.284 (3.478) | 15.546 (2.997) |
| | fixed-phrase-masking-4 | 53.567 (4.186) | 50.083 (9.006) | 59.223 (3.885) | 58.13 (1.208) | 79.834 (0.533) | 66.86 (2.309) | 14.51 (1.339) |
Table 7: StereoSet and Crow-S scores on the StereoSet (dev) dataset with 100 samples across various intervention techniques. All numbers are reported as mean and standard deviation across 3 runs.
is capable of few-shot fine-tuning. Moreover, sampling the most biased data points consistently helps our methods achieve better performance, as shown in Figure 5 and Table 6. Table 5 shows the three most gender-biased entries found in the StereoSet dataset.
## A.4 Ablations Of Interventions
We study the effect of different replacement choices for name and non-name words. In addition to the three interventions proposed previously, we also experimented with a couple of others. In female-first-random-phrase-masking, we always keep the female gendered word before the male word, to understand whether the order in which a model encounters gender words has any effect on debiasing. In Table 7, we see that it does not perform any better than random-phrase-masking. We also try fixing the phrases used by random-phrase-masking, yielding fixed-phrase-masking. We obtain four variants of this method corresponding to the following four phrases:
1. both [1] and [2]
2. [1] and [2]
3. [1] or [2]
4. either [1] or [2]
Here, [1] and [2] are substituted with opposite gender words. As we observe in Table 7, fixed-phrase-masking-3 obtains the lowest StereoSet Gender SS score of all our intervention methods. Similarly, naive-masking obtains the lowest Crow-S pair score.
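For concreteness, the sketch below shows how a fixed-phrase substitution such as variant 3 ("[1] or [2]") could be applied to a sentence; the gendered word pairs and the exact replacement rules are simplified assumptions and do not reproduce our interventions verbatim.

```python
import re

# Illustrative gendered word pairs; the full pair list used by our interventions is larger.
GENDER_PAIRS = {"she": "he", "her": "him", "woman": "man", "mother": "father"}
PAIRS = {**GENDER_PAIRS, **{v: k for k, v in GENDER_PAIRS.items()}}

def fixed_phrase_mask(sentence, template="{0} or {1}"):
    """Replace each gendered word with a fixed phrase that mentions both genders,
    e.g. "she" -> "she or he" (variant 3 above)."""
    def substitute(match):
        word = match.group(0)
        return template.format(word, PAIRS[word.lower()])
    pattern = re.compile(r"\b(" + "|".join(PAIRS) + r")\b", flags=re.IGNORECASE)
    return pattern.sub(substitute, sentence)

print(fixed_phrase_mask("She punched him in the face."))
# -> "She or he punched him or her in the face."
```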
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
9
✓ A2. Did you discuss any potential risks of your work?
10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The datasets used are open-source.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data is a general vocabulary. Hence, none of the used data contains any personal information.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The dataset pages have all the needed information
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
7
## C ✓ **Did You Run Computational Experiments?** 6
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
6
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
6
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
chi-etal-2023-plue | {PLUE}: Language Understanding Evaluation Benchmark for Privacy Policies in {E}nglish | https://aclanthology.org/2023.acl-short.31 | Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can support individuals and practitioners to understand better privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited by processing the language in a way exclusive to a single task focusing on certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating the privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training. We evaluate several generic pre-trained language models and continue pre-training them on the collected corpus. We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks. The code and models are released at \url{https://github.com/JFChi/PLUE}. |
## PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English
Jianfeng Chi1,2 Wasi Uddin Ahmad3∗ Yuan Tian3 **Kai-Wei Chang**3 1Meta AI, 2University of Virginia, 3University of California, Los Angeles [email protected],{wasiahmad,yuant,kwchang}@ucla.edu
## Abstract
Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can support individuals and practitioners to understand better privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited by processing the language in a way exclusive to a single task focusing on certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating the privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training. We evaluate several generic pre-trained language models and continue pre-training them on the collected corpus.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks. The code and models are released at https://github.com/JFChi/PLUE.
## 1 **Introduction**
Privacy policies are documents that outline how a company or organization collects, uses, shares, and protects individuals' personal information. Without a clear understanding of privacy policies, individuals may not know how their personal information is being used or who it is being shared with, and the resulting privacy violations might cause them harm.
However, privacy policies are lengthy and complex, which prevents users from reading and understanding them in detail (Commission et al., 2012; Gluck et al., 2016; Marotta-Wurgler, 2015).
Various natural language understanding (NLU)
technologies have recently been developed to understand privacy policies (Wilson et al., 2016a; Harkous et al., 2018; Ravichander et al., 2019;
∗Work done while at UCLA.
Ahmad et al., 2020; Parvez et al., 2022; Ahmad et al., 2021; Bui et al., 2021). These tasks focus on understanding specific privacy practices at different syntactic or semantic levels and require significant annotation effort (e.g., from domain experts), which makes it hard to fine-tune generic pre-trained language models (e.g., BERT (Devlin et al., 2019)) for each task with limited annotated data. Besides, the unique characteristics of privacy policies, such as reasoning over ambiguity and vagueness, modality, and document structure
(Ravichander et al., 2021), make it challenging to directly apply generic pre-trained language models to the privacy policy domain.
To address these problems and encourage research to develop NLU technologies in the privacy policy domain, we introduce the Privacy Policy Language Understanding Evaluation (PLUE)
benchmark, to evaluate the privacy policy language understanding across six tasks, including text classification, question answering, semantic parsing, and named-entity recognition. PLUE also includes a pre-training privacy policy corpus that we crawl from the websites to enable privacy policy domainspecific language model pre-training. We use this corpus to pre-train BERT (Devlin et al., 2019),
RoBERTa (Liu et al., 2019), Electra (Clark et al.,
2020), and SpanBERT (Joshi et al., 2020) and finetune them on the downstream tasks. We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
We will release the benchmark to assist natural language processing (NLP) researchers and practitioners in future exploration.
## 2 **Policy Language Understanding Evaluation (PLUE) Benchmark**
PLUE is centered on six English privacy policy language understanding tasks. The datasets and tasks are selected based on the following principles: (1)
usefulness: the selected tasks can help practitioners
| Dataset | Task | Sub-domain | \|Policy\| | \|Train\| | \|Dev\| | \|Test\| | Metric |
|---|---|---|---|---|---|---|---|
| OPP-115 | Classification | Websites | 115 | 2,771 | 395 | 625 | F1 |
| APP-350 | Classification | Mobile Apps | 350 | 10,150 | 2,817 | 2,540 | F1 |
| PrivacyQA | QA | Mobile Apps | 35 | 1,350 | - | 400 | P / R / F1 |
| PolicyQA | QA | Mobile Apps | 115 | 17,056 | 3,809 | 4,152 | F1 / EM |
| PolicyIE | Intent Classification, Slot Filling | Websites, Mobile Apps | 31 | 4,209 | - | 1,041 | F1 / EM |
| PI-Extract | NER | Websites | 30 | 3,034 | - | 1,028 | F1 |

Table 1: Statistics of the PLUE datasets and tasks.
in the domain quickly understand privacy practices without reading the whole privacy policy; (2) task diversity: the selected tasks focus on different semantic levels, e.g., words (phrases), sentences, and paragraphs; (3) task difficulty: the selected tasks should be adequately challenging for more room for improvement; (4) training efficiency: all tasks can be trainable on a single moderate GPU (e.g.,
GeForce GTX 1080 Ti) for no more than ten hours;
(5) accessibility: all datasets are publicly available under licenses that allow usage and redistribution for research purposes.
## 2.1 **Datasets And Tasks**
PLUE includes six tasks in four categories. Table 1 presents an overview of the datasets and tasks within PLUE, and Table 4 in the Appendix gives an example for each task.
OPP-115 Wilson et al. (2016a) presented 115 Online Privacy Policies (OPP-115). The dataset comprises website privacy policies with text segments annotated with one or more privacy practices from ten categories (see Appendix A.1). We train a multi-label classifier to predict the privacy practices given a sentence from a policy document.
APP-350 Zimmeck et al. (2019) presented APP350, a collection of mobile application privacy policies annotating what types of users' data mobile applications collect or share. Like OPP-115, each text segment in a policy document is annotated with zero or more privacy practices (listed in Appendix A.2). In total, there are 30 data-type-related classes in APP-350, and we assign one more class, No_Mention, to those text segments that do not pertain to such practices.
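Both OPP-115 and APP-350 are cast as multi-label text classification over policy segments. A minimal sketch of such a classifier with Hugging Face Transformers is shown below; the backbone checkpoint, the reduced label set, and the single training step are illustrative placeholders rather than the exact PLUE training script.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Reduced, illustrative label set; OPP-115 actually has ten practice categories (Appendix A.1).
LABELS = ["First Party Collection/Use", "Third Party Sharing/Collection", "Data Security"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # switches the head to BCE-with-logits loss
)

segment = ("For your security, we only store your credit card information "
           "if you choose to set up an authorized account with one of our Sites.")
inputs = tokenizer(segment, return_tensors="pt", truncation=True)
# Multi-hot target: a segment may be annotated with several privacy practices at once.
targets = torch.tensor([[1.0, 0.0, 1.0]])

loss = model(**inputs, labels=targets).loss
loss.backward()  # one illustrative gradient step; a full loop would add an optimizer and batching
```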
PrivacyQA Ravichander et al. (2019) proposed a question-answering dataset, PrivacyQA, comprised of 35 mobile application privacy policies. Given a question from a mobile application user and a sentence from a privacy policy, the task is to predict whether the sentence is relevant to the question.
PrivacyQA includes unanswerable and subjective questions and formulates the QA task as a binary sentence classification task.
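Since PrivacyQA treats answer-sentence selection as binary classification over (question, sentence) pairs, a pair-encoding setup along these lines can be used; the checkpoint and the single training step below are illustrative, not the exact PLUE script.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Two classes: the policy sentence is relevant / irrelevant to the user's question.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

question = "Does the app track my location?"
sentence = ("We may collect and use information about your location (such as your country) "
            "to provide you with tailored educational experiences for your region.")

# Encode the pair as one sequence: [CLS] question [SEP] sentence [SEP]
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
label = torch.tensor([1])  # 1 = relevant

loss = model(**inputs, labels=label).loss
loss.backward()  # one illustrative training step
```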
PolicyQA Ahmad et al. (2020) proposed a reading comprehension (Rajpurkar et al., 2016) style dataset, PolicyQA. The dataset is derived from OPP-115 annotations that include a set of finegrained attributes and evidence text spans that support the annotations. Considering the annotated spans as the answer spans, PolicyQA generates diverse questions relating to the corresponding privacy practices and attributes. The task is to predict the answer text span given the corresponding text segment and question.
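PolicyQA follows the SQuAD-style extractive setting, so a span-prediction head applies directly. The sketch below decodes the highest-scoring span from a QA head; the checkpoint is a placeholder, and the predicted span is only meaningful after fine-tuning on PolicyQA.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

question = "What information may be used to provide advertisements?"
context = ("Google or any ad server may use information (not including your name, address, "
           "email address, or telephone number) about your visits to this and other websites.")

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding from the start/end logits; with an untrained head the span is arbitrary.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print(answer)
```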
PolicyIE Ahmad et al. (2021) proposed a semantic parsing dataset composed of two tasks: intent classification and slot filling. Given a sentence in a privacy policy, the task is to predict the sentence's intent (i.e., privacy practice) and identify the semantic concepts associated with the privacy practice.
Based on the role of the slots in privacy practices, PolicyIE groups them into type-I and type-II slots.
In total, there are four intent labels and 14 type-I and four type-II slot labels. We individually train a text classifier and sequence taggers to perform intent classification and slot filling, respectively.
PI-Extract Bui et al. (2021) presented PI-Extract, a named-entity recognition (NER) dataset. It aims to identify what types of user data are (or are not) collected or shared, as mentioned in the privacy policies.
It contains 4 types of named entities: COLLECT, NOT_COLLECT, SHARE, and NOT_SHARE. Note that named entities of different types may overlap.
Thus, we report results for collection-related and share-related entities separately.
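Both the PolicyIE slot-filling task and PI-Extract can be framed as BIO token classification. Below is a minimal sketch of such a sequence tagger; the tag inventory is an illustrative subset, and the predictions of the randomly initialized head are meaningless until fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative BIO tag subset for PI-Extract-style entities; the real label sets are larger.
TAGS = ["O", "B-COLLECT", "I-COLLECT", "B-SHARE", "I-SHARE"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(TAGS))

sentence = "We may share aggregate demographic and usage information with our partners."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assign each sub-word token its highest-scoring tag.
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, predictions):
    print(token, TAGS[tag_id])
```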
## 2.2 **Pre-Training Corpus Collection**
The existing pre-trained language models (PLMs)
mostly use data from BooksCorpus (Zhu et al.,
2015) and English Wikipedia. Language models pre-trained on text from those sources might not perform well on the downstream privacy policy language understanding tasks, as privacy policies are composed of text written by domain experts (e.g.,
lawyers). Gururangan et al. (2020) suggested that adapting to the domain's unlabeled data (domainadaptive pre-training) improves the performance of domain-specific tasks. Therefore, we collect a large privacy policy corpus for language model pre-training. In order to achieve broad coverage across privacy practices written in privacy policies
(William, 2020; Ahmad et al., 2021), we collect the privacy policies from two sources: mobile application privacy policies and website privacy policies.
Appendix B provides more details about how we collect these two types of privacy policies.
## 2.3 **Models & Training**
Baselines We benchmark pre-trained language models (PLMs), BERT (Devlin et al., 2019),
RoBERTa (Liu et al., 2019), SpanBERT (Joshi et al., 2020), Electra (Clark et al., 2020), and LEGAL-BERT (Chalkidis et al., 2020). We present the details of the PLMs in Appendix C.
Domain-specific Continual Pre-training In order to adapt PLMs to the privacy policy domain, we continue to train BERT, Electra, SpanBERT,
and RoBERTa on the pre-training corpus described in Section 2.2. We refer to them as PP-BERT, PP-RoBERTa, PP-SpanBERT, and PP-Electra, respectively.1 We present details in Appendix D.1.
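For the masked language models (BERT- and RoBERTa-style), the continual pre-training step amounts to further MLM training on the privacy policy corpus. A minimal sketch with the Hugging Face Trainer is given below; the corpus file name and most hyperparameters are placeholders rather than the exact settings reported in Appendix D.1 (Electra and SpanBERT use their own objectives and are not covered by this sketch).

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# "privacy_policies.txt" is a placeholder for the crawled pre-training corpus (Section 2.2).
dataset = load_dataset("text", data_files={"train": "privacy_policies.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="pp-bert", max_steps=100_000,
                         per_device_train_batch_size=32, learning_rate=1e-4)
Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
```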
Task-specific Fine-tuning We fine-tune PLMs for each PLUE task. We *only* tune the learning rate for each task, as we found in the preliminary experiments that model performances are highly sensitive to the learning rate. We present more details in Appendix D.2.
## 3 **Experiment Results**
Tables 2 and 3 present the results of all evaluated models on the PLUE tasks. Rows 2-9 show the results of the base PLMs and their corresponding variants with privacy policy domain-specific continual pre-training. Similar to GLUE (Wang et al., 2019), we also provide the average score over all PLUE tasks in the last column of Table 3.
We observe that the language models (PP-BERT,
1We continually pre-train only the base models to mitigate the environmental impact of our experiments, but our code supports continual pre-training of large PLMs too.
PP-SpanBERT, PP-Electra, PP-RoBERTa) adapted to the privacy policy domain outperform the general language models consistently in all the tasks, and PP-RoBERTa performs the best among all base models in terms of the average scores of all PLUE
tasks. In particular, PP-RoBERTa performs the best for OPP-115, APP-350, PrivacyQA,2 and PI-Extract among all base models. PP-BERT and PP-RoBERTa perform the best for PolicyQA; PP-Electra and PP-RoBERTa achieve the best performance for PolicyIE. In contrast, LEGAL-BERT
(row 10) performs comparably or shows moderate improvements over BERT, indicating that pretraining on the general legal corpus does not necessarily help privacy policy language understanding.
It is interesting to see that continual pre-training on privacy policy domain data benefits the language models differently. For example, in the text classification tasks (i.e., OPP-115 and APP-350), the performance difference between SpanBERT and PP-SpanBERT is the most significant, while the models pre-trained with MLM (BERT and RoBERTa) already show relatively high performance before continual pre-training, so continual pre-training brings only moderate gains to BERT and RoBERTa.
We further investigate the improvement of the large variants of PLMs over the base variants on PLUE tasks. Since PP-RoBERTaBASE performs the best among all base models, we also continually pre-train RoBERTaLARGE (PP-RoBERTaLARGE). As shown in the last five rows of Tables 2 and 3, the large pre-trained language models mostly outperform their base counterparts. Noticeably, PP-RoBERTaLARGE is the best-performing model in
(e.g., APP-350, PrivacyQA, slot filling in PolicyIE)
remains low, which indicates much potential for further work on NLP for the privacy policy domain.
## 4 **Related Work**
Privacy Policy Benchmarks The Usable Privacy Policy Project (Sadeh et al., 2013) is the most significant effort to date, resulting in a large pool of works (Wilson et al., 2016a,b; Sathyendra et al.,
2Ravichander et al. (2019) reported 39.8% F1 score for BERT model; however, we are able to achieve 36.3%.
| Models | \|Model\| | OPP-115 (F1) | APP-350 (F1) | PrivacyQA (P / R / F1) | PolicyQA (F1 / EM) | PI-Extract (F1) |
|---|---|---|---|---|---|---|
| Human | - | - | - | 68.8 / 69.0 / 68.9 | - | - |
| BERTBASE | 110M | 75.3 | 59.6 | 44.6 / 35.9 / 36.3 | 55.1 / 27.7 | 63.7 / 54.6 |
| ElectraBASE | 110M | 74.0 | 49.3 | 42.7 / 36.0 / 36.1 | 57.5 / 29.9 | 69.4 / 57.8 |
| SpanBERTBASE | 110M | 62.8 | 32.8 | 24.8 / 24.8 / 24.8 | 55.2 / 27.8 | 66.9 / 41.0 |
| RoBERTaBASE | 124M | 79.0 | 67.1 | 43.6 / 36.4 / 36.7 | 56.6 / 29.4 | 70.7 / 56.8 |
| PP-BERTBASE | 110M | 78.0 | 62.8 | 44.8 / 36.9 / 37.7 | 58.3 / 30.0 | 70.5 / 55.3 |
| PP-ElectraBASE | 110M | 73.1 | 57.1 | 48.3 / 38.8 / 39.3 | 58.0 / 30.0 | 70.3 / 61.2 |
| PP-SpanBERTBASE | 110M | 78.1 | 61.9 | 43.4 / 36.4 / 36.8 | 55.8 / 27.5 | 65.5 / 50.8 |
| PP-RoBERTaBASE | 124M | 80.2 | 69.5 | 49.8 / 40.1 / 40.9 | 57.8 / 30.3 | 71.2 / 61.3 |
| LEGAL-BERTBASE | 110M | 76.0 | 57.4 | 45.6 / 37.6 / 38.2 | 55.1 / 27.7 | 69.1 / 51.1 |
| BERTLARGE | 340M | 79.3 | 71.2 | 43.8 / 35.4 / 36.1 | 56.6 / 28.7 | 68.1 / 54.8 |
| ElectraLARGE | 340M | 78.7 | 41.5 | 46.6 / 42.1 / 40.5 | 60.7 / 33.2 | 70.1 / 59.5 |
| SpanBERTLARGE | 340M | 79.4 | 66.0 | 45.2 / 36.5 / 37.3 | 58.2 / 30.8 | 68.2 / 50.8 |
| RoBERTaLARGE | 355M | 79.9 | 72.4 | 47.6 / 41.4 / 40.6 | 59.8 / 32.5 | 70.9 / 62.8 |
| PP-RoBERTaLARGE | 355M | 79.8 | 74.5 | 49.3 / 39.5 / 40.4 | 61.1 / 33.2 | 71.6 / 66.9 |
2016; Mysore Sathyendra et al., 2017; Bhatia and Breaux, 2015; Bhatia et al., 2016; Hosseini et al.,
2016; Zimmeck et al., 2019) to facilitate the automation of privacy policy analysis. A wide range of NLP techniques have been explored accordingly
(Liu et al., 2014; Ramanath et al., 2014; Wilson et al., 2016a; Harkous et al., 2018; Zimmeck et al.,
2019; Shvartzshanider et al., 2018; Harkous et al.,
2018; Ravichander et al., 2019; Ahmad et al., 2020; Bui et al., 2021; Ahmad et al., 2021).
| Models | \|Model\| | Intent Classification (F1) | Type-I Slots (F1) | Type-I Slots (EM) | Type-II Slots (F1) | Type-II Slots (EM) | Avg |
|---|---|---|---|---|---|---|---|
| Human | - | 96.5 | 84.3 | 56.6 | 62.3 | 55.6 | - |
| BERTBASE | 110M | 73.7 | 55.2 | 19.7 | 34.7 | 29.8 | 48.2 |
| ElectraBASE | 110M | 73.7 | 56.4 | 22.8 | 36.5 | 30.7 | 49.1 |
| SpanBERTBASE | 110M | 71.9 | 44.0 | 10.8 | 29.7 | 17.5 | 44.2 |
| RoBERTaBASE | 110M | 74.5 | 56.8 | 22.0 | 39.2 | 32.0 | 50.0 |
| PP-BERTBASE | 110M | 76.9 | 56.7 | 22.8 | 38.7 | 32.5 | 50.7 |
| PP-ElectraBASE | 110M | 77.1 | 58.2 | 24.1 | 37.8 | 32.9 | 50.8 |
| PP-SpanBERTBASE | 110M | 75.0 | 54.1 | 19.8 | 33.6 | 26.7 | 48.4 |
| PP-RoBERTaBASE | 110M | 78.1 | 58.0 | 22.4 | 40.1 | 32.4 | 52.3 |
| LEGAL-BERTBASE | 110M | 72.6 | 53.8 | 19.5 | 36.1 | 29.7 | 48.6 |
| BERTLARGE | 340M | 75.5 | 56.8 | 23.0 | 38.4 | 32.2 | 50.0 |
| ElectraLARGE | 340M | 75.6 | 57.9 | 24.0 | 39.6 | 32.4 | 50.2 |
| SpanBERTLARGE | 340M | 73.8 | 45.5 | 9.5 | 38.8 | 29.8 | 48.2 |
| RoBERTaLARGE | 355M | 77.6 | 58.4 | 22.9 | 41.4 | 32.7 | 52.9 |
| PP-RoBERTaLARGE | 355M | 77.7 | 59.8 | 23.9 | 42.0 | 32.3 | 53.7 |
Pre-trained Language Models In the last few years, NLP research has witnessed a radical change with the advent of PLMs like ELMo (Peters et al.,
2018) and BERT (Devlin et al., 2019). PLMs achieved state-of-the-art results in many language understanding benchmarks. Consequently, PLMs have been developed for a wide range of domains, e.g., scientific (Beltagy et al., 2019), medical (Lee et al., 2020; Rasmy et al., 2021; Alsentzer et al.,
2019), legal (Chalkidis et al., 2020), and cybersecurity (Ranade et al., 2021; Bayer et al., 2022). This work investigates the adaptation of PLMs to facilitate NLP research in the privacy policy domain.
## 5 **Conclusion And Future Work**
Reliable aggregation of datasets and benchmarking foundation models on them facilitate future research. This work presents PLUE, a benchmark for training and evaluating new security and privacy policy models. PLUE will help researchers benchmark policy language understanding under a unified setup and facilitate reliable comparison.
PLUE also presents some challenges in language understanding evaluation for privacy policies. For example, the data imbalance across privacy practices is a major challenge in the PrivacyQA task (Parvez et al., 2022). Data efficiency is also a challenge for continual pre-training, as the amount of unlabeled data in this domain is small. Approaches such as Qin et al. (2022) could be investigated to continually adapt LMs to emerging data in this domain.
## Limitations
The pre-training privacy policy corpus and the downstream task datasets are unlikely to contain toxic or biased content. Therefore, they should not magnify toxicity or bias in the pre-trained and fine-tuned models, although the models may exhibit such behavior due to their original pretraining. The pre-training and benchmark datasets are formed based on privacy policies crawled in the past; as a result, they could be outdated by now.
This work focuses on the English language only, and the findings may not apply to other languages.
## Ethics Statement
License The OPP-115 and APP-350 datasets are made available for research, teaching, and scholarship purposes only, with further parameters in the spirit of a Creative Commons AttributionNonCommercial License (CC BY-NC). The PolicyQA and PI-Extract datasets are derived from OPP-115 datasets. The PrivacyQA and PolicyIE datasets are released under an MIT license. The pre-training corpus, MAPS Policies Dataset, is released under CC BY-NC. We strictly adhere to these licenses and will release the PLUE benchmark resources under CC BY-NC-SA 4.0.
Carbon Footprint Among the large models, we continually pre-train only RoBERTa on the privacy policy domain to reduce the environmental impact of training large models. The PP-BERT, PP-SpanBERT, PP-Electra, and PP-RoBERTa models were trained for 100k steps on Tesla V100 GPUs, which took 1-2 days. Therefore, the training would emit only 9kg of carbon into the environment.3 All fine-tuning experiments were very lightweight due to the small size of the datasets, resulting in approximately 12kg of carbon emissions.
## Acknowledgements
We thank the anonymous reviewers for their insightful comments. This work was supported in part by National Science Foundation Grant OAC 2002985, OAC 1920462, and CNS 1943100, Google Research Award, CISCO Research Award, and Meta Research Award. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect those of the US Government or NSF.
## References
Wasi Ahmad, Jianfeng Chi, Tu Le, Thomas Norton, Yuan Tian, and Kai-Wei Chang. 2021. Intent classification and slot filling for privacy policies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 4402–4417, Online.
Association for Computational Linguistics.
Wasi Ahmad, Jianfeng Chi, Yuan Tian, and Kai-Wei Chang. 2020. PolicyQA: A reading comprehension dataset for privacy policies. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 743–749, Online. Association for Computational Linguistics.
Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Ryan Amos, Gunes Acar, Eli Lucherini, Mihir Kshirsagar, Arvind Narayanan, and Jonathan Mayer. 2021.
Privacy policies over time: Curation and analysis of a million-document dataset. In *Proceedings of the* Web Conference 2021, WWW '21, page 2165–2176, New York, NY, USA. Association for Computing Machinery.
3Calculated using https://mlco2.github.io/impact, based on a total of 100 hours of training on Tesla V100 and Amazon Web Services as the provider.
Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, and Christian Reuter. 2022. Cysecbert: A domainadapted language model for the cybersecurity domain.
arXiv preprint arXiv:2212.02974.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–
3620, Hong Kong, China. Association for Computational Linguistics.
Jaspreet Bhatia and Travis D Breaux. 2015. Towards an information type lexicon for privacy policies. In 2015 IEEE eighth international workshop on requirements engineering and law (RELAW), pages 19–24. IEEE.
Jaspreet Bhatia, Morgan C Evans, Sudarshan Wadkar, and Travis D Breaux. 2016. Automated extraction of regulated information types using hyponymy relations. In *2016 IEEE 24th International Requirements* Engineering Conference Workshops (REW), pages 19–25. IEEE.
Duc Bui, Kang G. Shin, Jong-Min Choi, and Junbum Shin. 2021. Automated extraction and presentation of data practices in privacy policies. *Proceedings on* Privacy Enhancing Technologies, 2021(2):88–110.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos.
2020. LEGAL-BERT: The muppets straight out of law school. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2898– 2904, Online. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
In *International Conference on Learning Representations*.
Federal Trade Commission et al. 2012. Protecting consumer privacy in an era of rapid change. *FTC report*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Joshua Gluck, Florian Schaub, Amy Friedman, Hana Habib, Norman Sadeh, Lorrie Faith Cranor, and Yuvraj Agarwal. 2016. How short is too short? implications of length and framing on the effectiveness of privacy notices. In Twelfth Symposium on Usable Privacy and Security ({SOUPS} *2016)*, pages 321–340.
Suchin Gururangan, Ana Marasović, Swabha
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.
Hamza Harkous, Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G Shin, and Karl Aberer. 2018. Polisis: Automated analysis and presentation of privacy policies using deep learning. In 27th {USENIX} Security Symposium ({USENIX} *Security 18)*, pages 531–548.
Mitra Bokaei Hosseini, Sudarshan Wadkar, Travis D
Breaux, and Jianwei Niu. 2016. Lexical similarity of information type hypernyms, meronyms and synonyms in privacy policies. In *2016 AAAI Fall Symposium Series*.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. *Transactions of the Association for* Computational Linguistics, 8:64–77.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In International Conference on Learning Representations.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Fei Liu, Rohan Ramanath, Norman Sadeh, and Noah A.
Smith. 2014. A step towards usable privacy policy:
Automatic alignment of privacy statements. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 884–894, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Florencia Marotta-Wurgler. 2015. Does "notice and choice" disclosure regulation work? an empirical study of privacy policies,". In *Michigan Law: Law* and Economics Workshop.
Kanthashree Mysore Sathyendra, Shomir Wilson, Florian Schaub, Sebastian Zimmeck, and Norman Sadeh.
2017. Identifying the provision of choices in privacy policy text. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2774–2779, Copenhagen, Denmark. Association for Computational Linguistics.
Md Rizwan Parvez, Jianfeng Chi, Wasi Uddin Ahmad, Yuan Tian, and Kai-Wei Chang. 2022. Retrieval enhanced data augmentation for question answering on privacy policies. *arXiv preprint arXiv:2204.08952*.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. ELLE: Efficient lifelong pre-training for emerging data. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2789–2810, Dublin, Ireland. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Rohan Ramanath, Fei Liu, Norman Sadeh, and Noah A.
Smith. 2014. Unsupervised alignment of privacy policies using hidden Markov models. In *Proceedings* of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 605–610, Baltimore, Maryland. Association for Computational Linguistics.
Priyanka Ranade, Aritran Piplai, Anupam Joshi, and Tim Finin. 2021. Cybert: Contextualized embeddings for the cybersecurity domain. In 2021 IEEE
International Conference on Big Data (Big Data),
pages 3334–3342. IEEE.
Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. 2021. Med-bert: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. NPJ digital medicine, 4(1):1–13.
Abhilasha Ravichander, Alan W Black, Thomas Norton, Shomir Wilson, and Norman Sadeh. 2021. Breaking down walls of text: How can NLP benefit consumer privacy? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4125–4140, Online. Association for Computational Linguistics.
Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, and Norman Sadeh. 2019. Question answering for privacy policies: Combining computational and legal perspectives. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 4947–4958, Hong Kong, China. Association for Computational Linguistics.
Norman Sadeh, Alessandro Acquisti, Travis D Breaux, Lorrie Faith Cranor, Aleecia M McDonald, Joel R
Reidenberg, Noah A Smith, Fei Liu, N Cameron Russell, Florian Schaub, et al. 2013. The usable privacy policy project. *Technical report, Technical* Report, CMU-ISR-13-119.
Kanthashree Mysore Sathyendra, Florian Schaub, Shomir Wilson, and Norman Sadeh. 2016. Automatic extraction of opt-out choices from privacy policies. In *2016 AAAI Fall Symposium Series*.
Yan Shvartzshanider, Ananth Balashankar, Thomas Wies, and Lakshminarayanan Subramanian. 2018.
RECIPE: Applying open domain question answering to privacy policies. In *Proceedings of the Workshop* on Machine Reading for Question Answering, pages 71–77, Melbourne, Australia. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *International Conference on Learning Representations*.
Henry William. 2020. Do web apps and mobile apps need separate privacy policies?
Shomir Wilson, Florian Schaub, Aswarth Abhilash Dara, Frederick Liu, Sushain Cherivirala, Pedro Giovanni Leon, Mads Schaarup Andersen, Sebastian Zimmeck, Kanthashree Mysore Sathyendra, N. Cameron Russell, Thomas B. Norton, Eduard Hovy, Joel Reidenberg, and Norman Sadeh. 2016a.
The creation and analysis of a website privacy policy corpus. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 1330–1340, Berlin, Germany. Association for Computational Linguistics.
Shomir Wilson, Florian Schaub, Rohan Ramanath, Norman Sadeh, Fei Liu, Noah A Smith, and Frederick Liu. 2016b. Crowdsourcing annotations for websites' privacy policies: Can it really work? In *Proceedings* of the 25th International Conference on World Wide Web, pages 133–143. International World Wide Web Conferences Steering Committee.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27.
Sebastian Zimmeck, Peter Story, Daniel Smullen, Abhilasha Ravichander, Ziqi Wang, Joel Reidenberg, N Cameron Russell, and Norman Sadeh. 2019. Maps:
Scaling privacy compliance analysis to a million apps.
Proceedings on Privacy Enhancing Technologies, 2019(3):66–86.
## Supplementary Material: Appendices

## A **Dataset Details**

## A.1 **OPP-115 Privacy Practices**

1. First Party Collection/Use
2. Third Party Sharing/Collection
3. User Choice/Control
4. User Access, Edit, and Deletion
5. Data Retention
6. Data Security
7. Policy Change
8. Do Not Track
9. International and Specific Audiences
10. Other
## A.2 **APP-350 Privacy Practices**

1. Contact
2. Contact_Address_Book
3. Contact_City
4. Contact_E_Mail_Address
5. Contact_Password
6. Contact_Phone_Number
7. Contact_Postal_Address
8. Contact_ZIP
9. Demographic
10. Demographic_Age
11. Demographic_Gender
12. Facebook_SSO
13. Identifier
14. Identifier_Ad_ID
15. Identifier_Cookie_or_similar_Tech
16. Identifier_Device_ID
17. Identifier_IMEI
18. Identifier_IMSI
19. Identifier_IP_Address
20. Identifier_MAC
21. Identifier_Mobile_Carrier
22. Identifier_SIM_Serial
23. Identifier_SSID_BSSID
24. Location
25. Location_Bluetooth
26. Location_Cell_Tower
27. Location_GPS
28. Location_IP_Address
29. Location_WiFi
30. SSO
## B **More Details Of Pre-Training Corpora**
We use MAPS, the mobile application privacy policy corpus presented by Zimmeck et al. (2019).
MAPS consists of the URLs of 441K mobile application privacy policies, which were collected from April to May 2018 from the Google Play store. We remove the duplicated URLs, crawl the privacy policy documents in HTML/PDF format, convert them to raw text format, and filter out the documents with noise (e.g., empty documents resulting from obsolete URLs). Finally, we ended up with 64K privacy policy documents. For website privacy policies, we use the Princeton-Leuven Longitudinal Corpus of Privacy Policies (Amos et al., 2021).4 The Princeton-Leuven Longitudinal Corpus of Privacy Policies contains 130K website privacy policies spanning over two decades. We use the documents with the latest date and convert them (from markdown format) into text format.
Combining these two corpora, we obtain our pretraining corpus with 332M words.
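A rough sketch of the HTML-to-text conversion and filtering described above is given below; the URL list, the length threshold, and the use of requests/BeautifulSoup are illustrative assumptions about one possible implementation, not a description of our exact crawler.

```python
import requests
from bs4 import BeautifulSoup

def policy_html_to_text(url, min_words=100):
    """Fetch a privacy policy page and convert it to plain text.
    Returns None for pages that fail to load or are too short (e.g., obsolete URLs)."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return None
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())
    return text if len(text.split()) >= min_words else None

# Deduplicate URLs before crawling, then drop noisy or empty documents.
urls = ["https://example.com/privacy"]  # placeholder for the MAPS URL list
documents = [doc for url in sorted(set(urls)) if (doc := policy_html_to_text(url)) is not None]
```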
## C **Baseline Models**
We benchmark a few pre-trained language models as baselines to facilitate future work.
BERT Devlin et al. (2019) proposed Transformer
(Vaswani et al., 2017) based language model pretrained on BooksCorpus and English Wikipedia data using masked language modeling (MLM) and next sentence prediction.
Electra Clark et al. (2020) pre-trains a generator and a discriminator on the same corpus as BERT,
where the generator takes a masked text as input and is trained using the MLM objective. The discriminator takes the predictions from the generator and detects which tokens are replaced by the generator. After pre-training, the generator is discarded, and the discriminator is used as the language model for the downstream tasks.
SpanBERT Joshi et al. (2020) shares the same architecture and pre-training corpus as BERT but differs in the pre-training objectives. It extends BERT by masking contiguous spans instead of single tokens and training the span boundary representations to predict the masked spans.
RoBERTa Liu et al. (2019) presented a replication study of BERT pretraining where they showed that BERT was significantly undertrained and proposed RoBERTa that tunes key hyperparameters

4 The corpus is publicly available at https://github.com/citp/privacy-policy-historical.
OPP-115 **Text:** Secure Online Ordering For your security, we only store your credit card information if you choose to set up an authorized account with one of our Sites. In that case, it is stored on a secure computer in an encrypted format. If you do not set up an account, you will have to enter your credit card information each time you order. We understand that this may be a little inconvenient for you, but some customers appreciate the added security.
## Classes: Data Security, User Choice/Control, First Party Collection/Use
APP-350 **Text:** *Our Use of Web Beacons and Analytics Services Microsoft web pages may contain* electronic images known as web beacons (also called single-pixel gifs) that we use to help deliver cookies on our websites, count users who have visited those websites and deliver co-branded products. We also include web beacons in our promotional email messages or newsletters to determine whether you open and act on them.
Classes: Contact_E_Mail_Address, Identifier_Cookie_or_similar_Tech
PrivacyQA **Sentence:** We may collect and use information about your location (such as your country) or infer your approximate location based on your IP address in order to provide you with tailored educational experiences for your region, but we don't collect the precise geolocation of you or your device.
## Question: Does The App Track My Location? **Answer:** Relevant
PolicyQA **Text:** *Illini Media never shares personally identifiable information provided to us online in* ways unrelated to the ones described above without allowing you to opt out or otherwise prohibit such unrelated uses. Google or any ad server may use information (not including your name, address, email address, or telephone number) about your visits to this and other websites in order to provide advertisements about goods and services of interest to you.
| Answer: information (not including your name, address, email address or telephone number) | |
|---------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| PolicyIE | Sentence: We may also use or display your username and icon or profile photo for marketing purposes or press releases. |
Intent: Data Collection/Usage **Slots:** (1) *Data Collector: First Party Entity–We*, (2)
Action–use, (3) *Data Provider: User–your*, (4) *Data Collected: User Online Activities/Profiles–*
username, (5) *Data Collected: User Online Activities/Profiles–icon or profile photo*, (6)
Purpose: Advertising/Marketing–marketing purpose or press releases.
PIExtract **Text:** *We may share aggregate demographic and usage information with our prospective and actual business partners, advertisers, and other third parties for any business purpose.* **Entities:** SHARE - aggregate demographic and usage information

Table 4: Examples from the tasks in PLUE.
and uses more training data to achieve remarkable performance improvements. Note that while BERT,
Electra, and SpanBERT use the same vocabulary, RoBERTa uses a different vocabulary resulting in 15M more parameters in the model.
LEGAL-BERT Chalkidis et al. (2020) pretrained BERT using 12 GB of the English text
(over 351K documents) from several legal fields (e.g., contracts, legislation, court cases) scraped from publicly available resources. Since privacy
policies serve as official documents to protect the company and consumers' privacy rights and might contain contents in response to privacy law (e.g., GDPR), we study LEGAL-BERT's effectiveness on the PLUE tasks.
## D **More Implementation Details**

## D.1 **Domain-Specific Continual Pre-Training**
Since BERT, Electra, and SpanBERT share the same model architectures, we use almost the same hyperparameters (e.g., learning rate, train steps, batch size) for them following the original papers.
We scale down the train steps by the same factor, as the size of our pre-training corpus is roughly 1/10 the size of the pre-training corpus of BERT.
We adhere to the guidelines outlined in Liu et al.
(2019) to train RoBERTa with larger batch size, higher learning rate, and fewer train steps. Table 5 presents the training hyperparameters for PLMs.
## D.2 **Task-Specific Fine-Tuning**
We fine-tune the models for each task using the Adam (Kingma and Ba, 2015) optimizer with a batch size of 32. We fine-tune the models on the QA tasks for 3 epochs and other tasks for 20 epochs and perform a grid search on the learning rate for each task with validation examples. We chose the learning rate for tasks without validation examples based on our findings from the tasks with validation examples. Table 6 lists the hyperparameters for all the downstream tasks.
In OPP-115 and APP-350, we compute the class weights (the class weights are inversely proportional to the occurrences of the classes) and apply them in fine-tuning, as we find that both datasets have the class-imbalance problem and using class weights brings gains to overall performance. We also report the human performances for PrivacyQA
and PolicyIE from the original works.
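As a rough illustration of the class-weighting scheme above, the snippet below computes weights inversely proportional to class frequencies and plugs them into a weighted cross-entropy loss. It is a simplified single-label sketch with placeholder labels, not the released PLUE fine-tuning code (OPP-115 and APP-350 are annotated with multiple categories per segment, where a weighted multi-label loss would play the same role).

```python
import torch
from collections import Counter

def class_weights_from_labels(labels, num_classes):
    """Weights inversely proportional to the observed class frequencies."""
    counts = Counter(labels)
    total = len(labels)
    return torch.tensor(
        [total / (num_classes * counts.get(c, total)) for c in range(num_classes)],
        dtype=torch.float,
    )

# `train_labels` is a placeholder list of integer class ids, e.g. OPP-115 categories.
train_labels = [0, 2, 2, 1, 2, 0]
weights = class_weights_from_labels(train_labels, num_classes=3)
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)  # used during fine-tuning
```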
## D.3 **Software Tools**
To facilitate using PLUE, we release our implementation, which is built with Pytorch (Paszke et al.,
2019) and the Huggingface transformers5 package. Our implementation includes the continual pre-training of our baselines and the evaluation of any PLMs supported by the Huggingface transformers package on the PLUE benchmark tasks. In addition to PLUE datasets, we release the pretraining corpus and all data pre-processing scripts, including the pre-training corpus crawling scripts, to assist future research in this area.
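As an illustration of what evaluating an arbitrary Hugging Face checkpoint on a PLUE classification task might look like, the sketch below loads a model and tokenizer; the checkpoint name and label count are placeholders and the actual fine-tuning loop on the PLUE training split is omitted.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # any checkpoint on the Hugging Face hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is a placeholder for the number of classes of the chosen PLUE task
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=12)

inputs = tokenizer("We only store your credit card information if you choose to set up an account.",
                   truncation=True, return_tensors="pt")
logits = model(**inputs).logits  # fine-tuning on the task's training split happens before inference
```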
| | PP-BERT | PP-SpanBERT | PP-Electra | PP-RoBERTa |
|------------------------|---------|-------------|------------|------------|
| Learning Rate | 1e-4 | 1e-4 | 1e-4 | 6e-4 |
| Train Steps | 100,000 | 100,000 | 100,000 | 12,500 |
| Batch Size | 256 | 256 | 256 | 2048 |
| Learning Rate Schedule | linear | polynomial_decay | linear | linear |
| # warm-up steps | 1000 | 1000 | 1000 | 600 |
| Optimizer | AdamW | AdamW | AdamW | AdamW |
| | Text Classification | Question Answering | Semantic Parsing | NER |
|------------------------|---------------------|--------------------|------------------|-----|
| Dropout | 0.1 | 0.1 | 0.1 | 0.1 |
| Weight decay | 0.0 | 0.0 | 0.0 | 0.0 |
| Optimizer | AdamW | AdamW | AdamW | AdamW |
| Batch Size | 32 | 32 | 32 | 32 |
| Learning rate | [3e-4, 1e-4, 5e-5, 3e-5, 1e-5, 5e-6, 3e-6] | | | |
| Learning Rate Schedule | Linear | Linear | Linear | Linear |
| Warm-up Ratio | 0.05 | 0.0 | 0.05 | 0.05 |
| # epoch | 20 | 3 | 20 | 20 |
Table 5: Hyperparameters for pre-training language models.
Table 6: Hyperparameters for fine-tuning pre-trained language models on different PLUE tasks.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In the section "Limitations"
✓ A2. Did you discuss any potential risks of your work?
In the sections "Limitations" and "Ethics Statement"
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and section 1 "Introduction"
✓ A4. Have you used AI writing assistants when working on this paper?
Yes, we use Grammarly and ChatGPT for assistance purely with the language of the paper (e.g.,
grammar error checking and paper paraphrasing). We mainly use them in the introduction.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
In section 2, we describe the creation of our benchmark.
✓ B1. Did you cite the creators of artifacts you used?
In section 2, we cite the creators of the artifacts we used.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In the section "Ethics Statement."
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In the section "Ethics Statement."
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2.
## C ✓ **Did You Run Computational Experiments?**
Section 3.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In the section "Ethics Statement."
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 2.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In appendix D.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
karoui-etal-2023-stop | Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages | https://aclanthology.org/2023.acl-short.32 | Vision-Language Pre-training (VLP) has advanced the performance of many vision-language tasks, such as image-text retrieval, visual entailment, and visual reasoning. The pre-training mostly utilizes lexical databases and image queries in English. Previous work has demonstrated that the pre-training in English does not transfer well to other languages in a zero-shot setting. However, multilingual pre-trained language models (MPLM) have excelled at a variety of single-modal language tasks. In this paper, we propose a simple yet efficient approach to adapt VLP to unseen languages using MPLM.We utilize a cross-lingual contextualised token embeddings alignment approach to train text encoders for non-English languages. Our approach does not require image input and primarily uses machine translation, eliminating the need for target language data. Our evaluation across three distinct tasks (image-text retrieval, visual entailment, and natural language visual reasoning) demonstrates that this approach outperforms the state-of-the-art multilingual vision-language models without requiring large parallel corpora. Our code is available at \url{https://github.com/Yasminekaroui/CliCoTea}. | # Stop Pre-Training: Adapt Visual-Language Models To Unseen Languages
Yasmine Karouiµ∗ Rémi Lebretλ Negar Foroutanλ **Karl Aberer**λ µTechnical University of Munich, Germany λEPFL, Switzerland
## Abstract
Vision-Language Pre-training (VLP) has advanced the performance of many visionlanguage tasks, such as image-text retrieval, visual entailment, and visual reasoning. The pre-training mostly utilizes lexical databases and image queries in English. Previous work has demonstrated that the pre-training in English does not transfer well to other languages in a zero-shot setting. However, multilingual pre-trained language models (MPLM) have excelled at a variety of single-modal language tasks. In this paper, we propose a simple yet efficient approach to adapt VLP to unseen languages using MPLM. We utilize a cross-lingual contextualized token embeddings alignment approach to train text encoders for non-English languages. Our approach does not require image input and primarily uses machine translation, eliminating the need for target language data. Our evaluation across three distinct tasks (image-text retrieval, visual entailment, and natural language visual reasoning) demonstrates that this approach outperforms the state-of-the-art multilingual vision-language models without requiring large parallel corpora. Our code is available at https://github.com/Yasminekaroui/CliCoTea.
## 1 Introduction
Inspired by the recent advancements in language model pre-training, Vision-Language Pre-trained Models (VLPMs) have demonstrated state-of-theart performance across a wide range of visionlanguage (VL) tasks such as text-to-image retrieval, visual reasoning, visual entailment, and visual QA (Chen et al., 2020; Li et al., 2021, 2022).
However, extending VLPMs to multilingual scenarios is still challenging. On one hand, the majority of these models are trained on monolingual (English) corpora and thus cannot perform well for other languages. On the other hand, the multilingual pre-trained language models (Devlin et al.,
∗Yasmine performed this work while interning at EPFL.
![0_image_0.png](0_image_0.png)
Figure 1: Overview of our approach. We adapt the text encoder of a monolingual VL model to an unseen language (a). Then we use the adapted model for a VL
downstream task in a zero-shot setting (b).
2018; Conneau et al., 2019) cannot handle vision data (e.g., images or videos) directly.
Lately, there have been attempts (M3P,
nUNITER, UC2) to pivot on images or English texts to align multilingual representations with vision features (Chen et al., 2020; Ni et al.,
2021; Zhou et al., 2021). However, a recent benchmark on multilingual multimodal pretraining (IGLUE) (Bugliarello et al., 2022) shows that although these models achieve promising zeroshot cross-lingual transfer performance on some VL tasks, they still fall short in comparison to the "translate-test" baseline (using an English-only VLPM on the translations of the text examples).
A more recent work (CCLM) achieves promising performance on the IGLUE benchmark by exploiting massive parallel text and image-text corpora to pre-train a VL model (Zeng et al., 2022). This approach is motivated by a key observation that multilingual and multimodal pre-training essentially achieves the same goal of aligning two different views of the same object into a common semantic space. Although this framework performs well on the IGLUE benchmark, it requires a large amount of parallel data. Its pre-training phase relies on 19M multilingual parallel sentence pairs extracted from WikiMatrix (Schwenk et al., 2021), jointly trained with 4 million image-text pairs in multiple languages.
In this work, we are proposing a simple yet efficient way to adapt VLP models to unseen languages without requiring large parallel corpora.
We propose to align a VLPM monolingual text encoder (achieving start-of-the-art performance on English downstream VL tasks) with a multilingual pre-trained language model (e.g., mBERT),
using only small in-domain parallel text corpus.
The recent progress in Neural Machine Translation (NMT) has enabled us to create such a parallel corpus from automatically translating the data from English to any other language, even for lowresource languages (i.e., Swahili). However, since our approach relies on token alignment, it is robust to errors made by NMT. Our zero-shot evaluation across three of the four IGLUE tasks shows that the proposed method achieves state-of-the-art results while using small set of in-domain parallel sentences. The key steps of our approach are illustrated in Figure 1.
## 2 CLiCoTEA: Cross-Lingual Contextualised Token Embedding Alignment
We propose CLiCoTEA , an approach to transfer a monolingual vision-language (VL) pre-trained model in one language L1 where there is an abundant number of training pairs of image and text (i.e.,
English) to a second language L2. As we focus in this paper on the zero-shot setting, we do the transfer after fine-tuning the pre-trained monolingual VL model on a downstream task t, where training samples are available in language L1.
CLiCoTEA consists of six steps:
1. Pre-train a monolingual VL model on a massive collection of image-text pairs, where text is written in language L1.
2. Fine-tune the VL pre-trained model on the downstream task t in language L1.
3. Create a parallel text corpus by translating the training set from step 2 in the target language L2. Note that this step can be done automatically using neural machine translation.
4. Create a list of aligned tokens for each (potentially noisy) parallel sentence using a token alignment model.
5. Cross-lingual transfer by aligning contextualised token embeddings. As illustrated in Figure 1a, it transfers the VL fine-tuned model to the new language L2 by aligning a pre-trained multilingual LM (e.g., mBERT or XLM-R)
with the text encoder of the VL pre-trained model using the list of aligned tokens created in step 4.
6. Zero-shot transfer to L2 by swapping the monolingual text encoder from the VL pretrained model with the aligned multilingual text encoder learned in step 5. An example of visual reasoning in Indonesian is illustrated in Figure 1b.
In practice, steps 1 and 2 are the most computationally expensive. Therefore, we propose to adapt VL fine-tuned models to new languages by only doing the steps from 3 to 5 which can be computed in a few hours on a single GPU.
We note that CLiCoTEA could be used with any multimodal pre-trained model where one of the modalities is a monolingual text encoder. We focus in this paper on VL models, but CLiCoTEA could be applied for instance to a language-knowledge model such as GreaseLM (Zhang et al., 2021) or DRAGON (Yasunaga et al., 2022).
## 3 Experiment

## 3.1 Pre-Trained Models
Vision-Language Model In step 1 of CLiCoTEA , we use the Align BEfore Fuse
(ALBEF) framework1(Li et al., 2021) as our Vision-Language Pre-trained Model (VLPM). ALBEF has been fine-tuned on multiple downstream VL tasks and achieves state-of-the-art performance.
We use the ALBEF fine-tuned models in step 2 for the downstream tasks described in Section 3.3.
Unlike other competitive VL pre-trained models
(such as BLIP (Li et al., 2022)) that inject visual information by inserting cross-attention for each transformer block of the text encoder, ALBEF
first encodes the image and text independently with a detector-free image encoder and a text encoder. Then it uses a multimodal encoder to fuse the image features with the text features through cross-modal attention. All encoders are based on transformer networks, with the text encoder being a 6-layer transformer initialised using the first 6 layers of BERT-base. We thus extract this 6-layer text encoder for cross-lingual transfer training in step 5.

1Code and models are available at https://github.com/salesforce/ALBEF.
Multilingual Language Model As a multilingual pre-trained language model, we use the multilingual BERT (mBERT)2(Devlin et al., 2018). It has been trained on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective and has demonstrated remarkable zero-shot cross-lingual transfer capabilities (Wu and Dredze, 2019; Pires et al., 2019; Hu et al., 2020; Conneau et al., 2018). We extract the first 6-layer transformer to be aligned with the text encoder of ALBEF in step 5.
## 3.2 Implementation Details
Word Alignment Since the parallel sentences do not contain word-level alignment information, in step 4 of CLiCoTEA we utilize awesome-align3(Dou and Neubig, 2021)
which is a tool that automatically extracts word alignments from mBERT. The generated word pairs are then filtered for keeping only one-to-one, oneto-many or many-to-one alignments and removing many-to-many alignments. This is done for all languages except Chinese because otherwise less than 3% of the training data would remain in the set.
The advantage of this filtering is twofold: a) it removes the noise from the matching word pairs; b)
it reduces the training time and computation. For words that are split into sub-word tokens, we consider either the left-most token embedding alignment (i.e., the first sub-word token of a word) or, the average embedding across all sub-word tokens.
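A minimal sketch of this filtering step is given below, assuming the word alignments have already been parsed from awesome-align's `i-j` output lines into index pairs; the helper name is ours, not part of the tool.

```python
from collections import defaultdict

def filter_alignments(pairs):
    """Keep one-to-one, one-to-many and many-to-one links; drop many-to-many links.

    `pairs` is a list of (src_idx, tgt_idx) tuples, e.g. parsed from an
    awesome-align output line such as "0-0 1-2 1-3".
    """
    src_fanout, tgt_fanout = defaultdict(int), defaultdict(int)
    for s, t in pairs:
        src_fanout[s] += 1
        tgt_fanout[t] += 1
    # A pair survives unless both its source and target word take part in multiple links.
    return [(s, t) for s, t in pairs if src_fanout[s] == 1 or tgt_fanout[t] == 1]

print(filter_alignments([(0, 0), (1, 2), (1, 3), (2, 3)]))
# [(0, 0), (1, 2), (2, 3)] — the pair (1, 3) is a many-to-many link and is dropped.
```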
Contextualised Token Alignment Training Given a set of aligned contextual word pairs extracted from parallel sentences, we define $\{x_i, y_i\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^d$ is the contextualised embedding of token $i$ in the target language (obtained from mBERT), and $y_i \in \mathbb{R}^d$ is the contextualised embedding of its alignment in the source language (obtained from the fine-tuned ALBEF)4. In step 5 of CLiCoTEA, we minimise the following training objective: $\sum_{i=1}^{n} \|x_i - y_i\|_2$.
The parameters of the source language encoder are frozen, while the ones of the target language encoder are fine-tuned at training time. The learning rate is set to $5 \times 10^{-5}$. The batch size is set to 128.
These hyperparameters are set through the NLVR2, Flickr30k, SNLI validation sets, for each task respectively. For each target language, the training is done on a single GeForce GTX TITAN X in a few hours.
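A minimal sketch of this alignment objective, assuming `student_hidden` comes from the trainable 6-layer mBERT encoder and `teacher_hidden` from the frozen ALBEF text encoder; the optimizer choice in the commented skeleton is an assumption on our part, since the paper only specifies the learning rate and batch size.

```python
import torch

def alignment_loss(student_hidden, teacher_hidden, align_pairs):
    """Sum of L2 distances between aligned contextualised token embeddings.

    student_hidden / teacher_hidden: (seq_len, hidden_dim) tensors for one
    target-language / source-language sentence; align_pairs: list of
    (tgt_token_idx, src_token_idx) produced by the word-alignment step.
    """
    tgt_idx = torch.tensor([t for t, _ in align_pairs])
    src_idx = torch.tensor([s for _, s in align_pairs])
    x = student_hidden[tgt_idx]           # trainable target-language encoder (mBERT)
    y = teacher_hidden[src_idx].detach()  # frozen source-language encoder (ALBEF text encoder)
    return (x - y).norm(dim=-1).sum()

# Training skeleton (batch size 128, learning rate 5e-5 as stated above; optimizer assumed):
# optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)
# loss = alignment_loss(student(**tgt_batch).last_hidden_state[0],
#                       teacher(**src_batch).last_hidden_state[0], align_pairs)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```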
Data Augmentation As multilingual language models are generally pre-trained on the source language L1, the contextualised token alignment can be trained not only with sentences from the target language L2, but also with sentences from the source language L1. This strategy doubles the training size, and consequently, the training time but it could be used with tasks where the number of available training sentences is limited.
## 3.3 Downstream Tasks
In step 6, we evaluate CLiCoTEA on three tasks from the IGLUE benchmark5in the zero-shot setting:
- **xFlickr&CO**: The dataset is composed of 1000 images from Flickr30K (Plummer et al.,
2015) and 1000 images from MSCOCO
dataset (Lin et al., 2014). These images come along with croudsourced image captions in 6 different languages. xFlickr&CO is a *retrieval* task dataset. It is composed of two subtasks:
image-to-text retrieval (TR) and text-to-image retrieval (IR).
- **XVNLI**: The dataset consists in merging SNLI hypothesis with Flickr30K (Plummer et al., 2015) images and translate the test set in four languages. The task is called *visual* entailment (VE) which is a fine-grained reasoning task to determine whether a text hypothesis "contradicts", "entails", or is "neutral" with respect to an image.
- **MaRVL**: The dataset is a multilingual expansion of NLVR2 dataset (Suhr et al., 2017),
with images related to concepts of five languages and cultures. The task is called visual reasoning (VR) which consists in determining whether a statement is correct given a pair of images.
Table 1: The datasets used in the different steps of CLiCoTEA . Translated train and validation captions are denoted with ∗.
Table 1 shows the datasets used for a) fine-tuning the monolingual VL pre-trained model in step 2, b) training the alignment of contextualised token embeddings in step 5, and c) testing the zero-shot cross-lingual transfer in step 6. For creating the parallel corpus in step 3, all datasets used for finetuning the monolingual pre-trained VL model are translated to the corresponding test dataset languages from the IGLUE benchmark using GoogleTrans Python API6. Statistics about the translation datasets can be found in Section A.1. MaRVL
being the smallest dataset, the data augmentation strategy described in Section 3.2 is applied only for this task. Detailed results on data augmentation can be found in Section 3.2.
## 3.4 Experimental Results
Results reported in Table 2 shows that CLiCoTEA outperforms the state-of-the-art CCLM models for all downstream tasks except retrieval. The larger improvement against CCLM models is obtained in visual entailment with an increase of almost 5%.
The superiority of CLiCoTEA is especially high for Spanish (+7.68%), as can be seen from Table 10 in Section A.4. The average performance on visual reasoning is similar to CCLM, but CLiCoTEA
significantly outperforms CCLM by ±4% on the low-resource languages such as Tamil and Swahili
(results per language can be seen in Table 8 in Section A.3). For retrieval, CLiCoTEA outperforms all models except CCLM4M. It is worth mentioning that, unlike the other models, CCLM4M has been pre-trained on COCO which could explain its superiority on the Flickr&CO dataset.

6https://pypi.org/project/googletrans/
| Step | Retrieval | VE | VR |
|----------------|-------------|-------|--------|
| Fine-tuning | Flickr30K | SNLI | NLVR2 |
| Alignment | Flickr30K∗ | SNLI∗ | NLVR2∗ |
| Zero-shot Test | xFlickr&CO | XVNLI | MaRVL |
More details about the results on retrieval can be found in Section A.2.
## 4 Conclusion
In this paper, we present CLiCoTEA an approach for adapting Vision-Language pre-trained models to unseen languages. Unlike other approaches that rely on an expensive pre-training phase (both in terms of data and computation), our approach adapts the contextualised token embeddings of a multilingual pre-trained language model by aligning them with the contextualised token embeddings of the VLPM text encoder. By aligning ALBEF text encoder with mBERT, we show that CLiCoTEA outperforms CCLM, which exploits massive parallel text and image-text corpora.
CLiCoTEA achieves state-of-the-art performance on visual entailment and visual reasoning, with an increase of almost 5% on visual entailment. It also demonstrates its effectiveness, especially for low-resource languages, as it does not require large corpora to do the adaptation.
| Model | VE (XVNLI) | VR (MaRVL) | Retrieval IR (xFlickr&CO) | Retrieval TR (xFlickr&CO) |
|----------|------------|------------|---------------------------|---------------------------|
| mUNITER | 53.69 | 53.72 | 8.06 | 8.86 |
| xUNITER | 58.48 | 54.59 | 14.04 | 13.51 |
| UC2 | 62.05 | 57.28 | 20.31 | 17.89 |
| M3P | 58.25 | 56.00 | 12.91 | 11.90 |
| CCLM3M | 74.64 | 65.91 | 67.35 | 65.37 |
| CCLM4M | 73.32 | 67.17 | 76.56 | 73.46 |
| CLiCoTEA | 78.15 | 68.09 | 67.45 | 65.07 |
## 5 Limitations
The general performance of CLiCoTEA could be improved with a better MPLM than mBERT, such as XLM-R which has a larger token vocabulary and has been pre-trained on a much larger dataset. Our approach is currently not applicable to generation tasks where a multilingual text decoder is needed to generate text in unseen languages. We leave this adaptation for future work. Unlike the statement made in Zeng et al. (2022), current multilingual VL models still do not surpass the *Translate-Test* baseline of the tasks from IGLUE benchmark. The performance of CLiCoTEA is promising but the best scores are still obtained when translating everything to English and using the (English-only) ALBEF model. The smallest difference in accuracy on MaRVL dataset between CLiCoTEA and ALBEF
with *Translate-Test* is obtained in Swahili (-2%),
while the gap is much larger (around -6%) for the other languages. Outperforming the *Translate-Test* achieved by ALBEF still remains an open challenge, especially for high-resource languages.
## References
Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, and Ivan Vulić. 2022. IGLUE: A benchmark for transfer learning across modalities, tasks, and languages. *arXiv preprint arXiv:2201.11732*.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *European conference on* computer vision, pages 104–120. Springer.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. *arXiv* preprint arXiv:2101.08231.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*.
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation.
Advances in neural information processing systems, 34:9694–9705.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *European conference on computer vision*, pages 740–755. Springer.
Minheng Ni, Haoyang Huang, Lin Su, Edward Cui, Taroon Bharti, Lijuan Wang, Dongdong Zhang, and Nan Duan. 2021. M3p: Learning universal representations via multitask multilingual multimodal pretraining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3977–3986.
Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, and Iryna Gurevych. 2021. xGQA: Cross-lingual visual question answering. *arXiv preprint arXiv:2109.06082*.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual bert? arXiv preprint arXiv:1906.01502.
Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the IEEE
international conference on computer vision, pages 2641–2649.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351–1361, Online. Association for Computational Linguistics.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi.
2017. A corpus of natural language for visual reasoning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 217–223.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. *arXiv preprint arXiv:1904.09077*.
Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D Manning, Percy Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. *arXiv* preprint arXiv:2210.09338.
Yan Zeng, Wangchunshu Zhou, Ao Luo, and Xinsong Zhang. 2022. Cross-view language modeling: Towards unified cross-lingual cross-modal pre-training.
arXiv preprint arXiv:2206.00621.
Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, and Jure Leskovec. 2021. Greaselm: Graph reasoning enhanced language models. In *International Conference on Learning Representations*.
Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, and Jingjing Liu.
2021. Uc2: Universal cross-lingual cross-modal vision-and-language pre-training. In *Proceedings* of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4155–4165.
## A Appendix

## A.1 Details Of Alignment Datasets

## A.2 Results On Retrieval
| Language | Total number | Avg. number of |
|--------------|----------------|------------------|
| of sentences | aligned tokens | |
| Indonesian | 86325 | 8.27 |
| Swahili | 85415 | 5.46 |
| Tamil | 85241 | 4.53 |
| Turkish | 85050 | 5.42 |
| Chinese | 86373 | 10.76 |
| Model | Language | | | | |
|----------|------------|-------|-------|-------|-------|
| DE | ES | ID | RU | TR | |
| mUNITER | 12.05 | 13.15 | 5.95 | 5.85 | 1.75 |
| xUNITER | 14.55 | 16.10 | 16.50 | 15.90 | 9.05 |
| UC2 | 28.60 | 15.95 | 14.60 | 20.00 | 7.15 |
| M3P | 13.35 | 13.40 | 13.20 | 15.95 | 7.75 |
| CCLM3M | 67.67 | 71.23 | 62.38 | 72.83 | 55.15 |
| CCLM4M | 73.65 | 79.62 | 69.50 | 80.65 | 65.08 |
| CLiCoTEA | 61.48 | 74.50 | 64.98 | 73.50 | 62.80 |
| Language | Total number | Avg. number of |
|--------------|----------------|------------------|
| of sentences | aligned tokens | |
| German | 144935 | 8.74 |
| Spanish | 144990 | 10.04 |
| Indonesian | 144858 | 7.46 |
| Russian | 144526 | 6.44 |
| Turkish | 143664 | 4.83 |
| Language | Total number | Avg. number of |
|--------------|----------------|------------------|
| of sentences | aligned tokens | |
| Arabic | 513683 | 2.95 |
| Spanish | 549785 | 6.31 |
| French | 549260 | 5.78 |
| Russian | 524308 | 3.60 |
Table 5: Statistics about NLVR2 translation set.
Compared with CCLM3M, CCLM4M has been trained with 1M
additional image-text pairs from Visual Genome and COCO datasets. The gap in performance between the two models on retrieval tasks suggests that pre-training with COCO text-image pairs gives a clear advantage to CCLM4M as Flickr&CO contains 1000 images from COCO, while all other models have been fine-tuned only on Flickr30K.
Tables 3, 4, and 5 show the average number of aligned tokens extracted from the translated sentences of Flickr30k, SNLI, and NLVR2, respectively.
Table 3: Statistics about Flickr30k translation set.
Table 6: Zero-shot performance on multi-lingual imagetext retrieval with Flickr&CO dataset. Recall@1 is reported.
Table 4: Statistics about SNLI translation set.
## A.3 Results On Natural Language Visual Reasoning
Table 8 shows the zero-shot performance on the MaRVL dataset, and the natural language visual reasoning task from the IGLUE benchmark, for all available languages (ID: Indonesian, SW: Swahili, TA: Tamil, TR: Turkish, ZH: Chinese).
As MaRVL is the smallest dataset among the three tasks from IGLUE, we apply the data augmentation for training the alignment as described in Section 3.2. Results reported in Table 9 show that there is a drop of 3.35% for Turkish, and 9.99%
for Chinese when training only using the target language L2, while there is no significant difference for the three other languages (Indonesian, Swahili, and Tamil).

Zero-shot performance on the Flickr&CO dataset, the image-text and text-image retrieval tasks from the IGLUE benchmark, for five available languages (DE: German, ES: Spanish, ID: Indonesian, RU: Russian, TR: Turkish), is reported in Table 6 and Table 7, respectively. CLiCoTEA outperforms all models except CCLM4M.
| Model | Language | | | | |
|----------|------------|-------|-------|-------|-------|
| DE | ES | ID | RU | TR | |
| mUNITER | 11.85 | 13.05 | 7.55 | 6.80 | 3.25 |
| xUNITER | 13.25 | 15.10 | 16.75 | 14.80 | 10.05 |
| UC2 | 23.90 | 15.30 | 13.60 | 16.75 | 6.95 |
| M3P | 11.85 | 12.15 | 12.10 | 14.45 | 8.35 |
| CCLM3M | 66.88 | 68.58 | 60.33 | 69.90 | 54.22 |
| CCLM4M | 73.60 | 78.38 | 67.67 | 80.35 | 63.22 |
| CLiCoTEA | 70.34 | 71.42 | 57.77 | 69.80 | 56.00 |
Table 7: Zero-shot performance on multi-lingual textimage retrieval with Flickr&CO dataset. Recall@1 is reported.
Table 8: Zero-shot performance on visual reasoning with MaRVL dataset. Accuracy is reported.
As explained in Section 3.2, our noise filtering technique does not work well with Chinese. Aligning the English sentences with half of the original training set helped the model infer knowledge from English and reduced the number of wrong matching words. For Turkish, the increase in performance could be explained by the similarity between the two alphabets.
Table 9: Zero-shot performance of CLiCoTEA on visual reasoning with MaRVL dataset using monolingual (L1) or bilingual (L1 + L2) alignment training. Accuracy is reported.
## A.4 Results On Visual Entailment
Zero-shot performance on the XVNLI dataset, the visual entailment task from the IGLUE benchmark, for all available languages (AR: Arabic, ES: Spanish, FR: French, RU: Russian) are reported in Table 10. CLiCoTEA outperforms other models by a significant margin for all languages, except Russian where CCLM3M achieves comparable performance.
Table 10: Zero-shot performance on visual entailment with XVNLI dataset. Accuracy is reported.
| Model | Language | | | | |
|----------|------------|-------|-------|-------|-------|
| ID | SW | TA | TR | ZH | |
| mUNITER | 54.79 | 51.17 | 52.66 | 54.66 | 55.34 |
| xUNITER | 55.14 | 55.51 | 53.06 | 56.19 | 53.06 |
| UC2 | 56.74 | 52.62 | 60.47 | 56.70 | 59.88 |
| M3P | 56.47 | 55.69 | 56.04 | 56.78 | 55.04 |
| CCLM3M | 67.81 | 61.55 | 60.28 | 69.60 | 70.52 |
| CCLM4M | 71.66 | 67.21 | 60.36 | 66.75 | 69.86 |
| CLiCoTEA | 69.55 | 71.30 | 63.93 | 70.72 | 64.93 |
## A.5 In-Domain Vs Open-Domain Data
Table 11: Zero-shot performance on visual reasoning with MaRVL dataset. Alignment is done with a subset from XNLI dataset.
In order to eliminate the need for machine translations from CLiCoTEA in step 3, we created a parallel text corpus with sentences obtained from XNLI (Conneau et al., 2018) which is publicly available and covers 15 languages. A subset of XNLI has been used for training the alignment by considering only the sentences that were semantically close to the captions in NLVR2. To do so, we used the Sentence-Transformers framework7 to compute sentence embedding similarities

7Available at https://www.sbert.net.
| Training Set | Language | | | | |
|----------------|------------|-------|-------|-------|-------|
| ID | SW | TA | TR | ZH | |
| L1 | 69.55 | 71.30 | 63.45 | 67.37 | 54.94 |
| L1 + L2 | 68.53 | 70.31 | 63.93 | 70.72 | 64.93 |
| Model | Language | | | |
|----------|------------|-------|-------|-------|
| AR | ES | FR | RU | |
| mUNITER | 46.73 | 56.96 | 59.36 | 51.72 |
| xUNITER | 51.98 | 58.94 | 63.32 | 59.71 |
| UC2 | 56.19 | 57.47 | 69.67 | 64.86 |
| M3P | 55.24 | 58.85 | 56.36 | 62.54 |
| CCLM3M | 71.04 | 75.80 | 78.14 | 73.56 |
| CCLM4M | 69.68 | 73.65 | 77.54 | 72.40 |
| CLiCoTEA | 75.83 | 83.48 | 80.17 | 73.13 |
| Language | Total number | Accuracy |
|--------------|----------------|------------|
| of sentences | in % | |
| Swahili | 50400 | 63.27 |
| Turkish | 50418 | 66.61 |
| Chinese | 51159 | 59.09 |
between NLVR2 captions and XNLI English sentences, and kept only the sentences with a cosine similarity higher than 0.5. About 50k English sentences from XNLI are semantically close to NLVR2 captions; we thus selected their parallel sentences in Swahili, Turkish and Chinese to perform an evaluation on the MaRVL dataset. After the contextualised token alignment training on XNLI-based datasets, our results in Table 11 suggest that a multilingual open-domain dataset gives better results than mUNITER and xUNITER but underperforms the results obtained by translating in-domain training sets. This could be explained by the fact that although these datasets are multilingual, the sentences are not semantically close enough to NLVR2 captions.
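The XNLI filtering described above might be sketched as follows; the particular Sentence-Transformers checkpoint is an assumption on our part, only the framework and the 0.5 cosine-similarity threshold come from the text.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint; the paper only states that the Sentence-Transformers framework was used.
model = SentenceTransformer("all-MiniLM-L6-v2")

nlvr2_captions = ["One image shows exactly two brown acorns."]
xnli_english = ["There are two acorns in the picture.", "The economy grew last year."]

cap_emb = model.encode(nlvr2_captions, convert_to_tensor=True)
xnli_emb = model.encode(xnli_english, convert_to_tensor=True)

# Keep an XNLI sentence if its best-matching NLVR2 caption exceeds the 0.5 threshold.
scores = util.cos_sim(xnli_emb, cap_emb).max(dim=1).values
kept = [s for s, score in zip(xnli_english, scores) if score > 0.5]
```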
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✗ A2. Did you discuss any potential risks of your work?
We could not think of any risk as we do not introduce any model or dataset.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
✓ B1. Did you cite the creators of artifacts you used?
Section 3
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We cited the datasets website that includes the licenses.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use the datasets only for evaluation.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No, because we have employed widely used public datasets and have not collected any data ourselves.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 and Appendix A
## C ✓ **Did You Run Computational Experiments?**
Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sections 2 and 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
he-etal-2023-buca | {BUCA}: A Binary Classification Approach to Unsupervised Commonsense Question Answering | https://aclanthology.org/2023.acl-short.33 | Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. |
## BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
Jie He1 and **Simon Chi Lok U**1 and **Víctor Gutiérrez-Basulto**2 and **Jeff Z. Pan**1

1ILCC, School of Informatics, University of Edinburgh, UK
2 School of Computer Science and Informatics, Cardiff University, UK
[email protected], [email protected] [email protected], [email protected]
## Abstract
Unsupervised commonsense reasoning (UCR)
is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples.
In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA
## 1 Introduction
Commonsense reasoning has recently received significant attention in NLP research (Bhargava and Ng, 2022), with a vast amount of datasets now available (Levesque, 2011; Gordon et al., 2012; Sap et al., 2019; Rashkin et al., 2018; Bisk et al.,
2020; Talmor et al., 2019). Most existing methods for commonsense reasoning either fine-tune large language models (LMs) on these datasets (Lourie et al., 2021) or use knowledge graphs (KGs) (Pan et al., 2017) to train LMs (Liu et al., 2019a; Yasunaga et al., 2022). However, it is not always possible to have relevant training data available, it is thus crucial to develop unsupervised approaches to commonsense reasoning that do not rely on labeled data.
In this paper, we focus on the unsupervised multiple choice question answering (QA) task: given a question and a set of answer options, the model is expected to predict the most likely option. We
![0_image_0.png](0_image_0.png)

(Figure content: the question context "John wanted to be a better dancer. John wanted to be social with their friends." and its candidate answers, each scored separately for reasonableness: (A) do the dance on their own (0.5), (B) **learn to dance (0.8)**, (C) dance well (0.6).)
propose **BUCA**, a binary classification framework for unsupervised commonsense QA. Our method roughly works as follows: we first convert knowledge graph triples into textual form using manually written templates, and generate positive and negative question-answer pairs. We then fine-tune a pretrained language model, and leverage contrastive learning to increase the ability to distinguish reasonable from unreasonable ones. Finally, we input each question and all options of the downstream commonsense QA task into BUCA to obtain the reasonableness scores and select the answer with the highest reasonableness score as the predicted answer. Experimental results on various commonsense reasoning benchmarks show the effectiveness of our proposed BUCA framework. Our main contributions are:
- We propose a binary classification approach to using KGs for unsupervised commonsense question answering.
- We conduct extensive experiments, showing the effectiveness of our approach by using much less data.
## 2 Related Work
Language models are widely used in unsupervised commonsense inference tasks, e.g. as an additional knowledge source or as a scoring model. Rajani et al. (2019) propose an explanation generation model for the CommonsenseQA dataset. Self-talk (Shwartz et al., 2020) uses prompts to stimulate GPT and generate new knowledge. SEQA (Niu et al., 2021) generates several candidate answers using GPT2 and then ranks each of them.
Another research direction in unsupervised commonsense reasoning is the use of e.g. commonsense KGs (Speer et al., 2016; Romero et al., 2019; Malaviya et al., 2020) to train the model (Chen et al., 2021; Geng et al., 2023). In Banerjee and Baral (2020), given the inputs of context, question and answer, the model learns to generate one of the inputs given the other two. Ma et al. (2021) update the model with a margin ranking loss computed on positive and negative examples from KGs. MICO
(Su et al., 2022) uses the distance between the positive and negative question-answer pairs obtained from the KG to calculate the loss. However, all of the above approaches demand a large amount of training data, sometimes reaching million of training samples, while BUCA only needs tens of thousands, cf. Table 2. The most similar to our work is NLI-KB (Huang et al., 2021), which trains a model on NLI data, then applies the corresponding knowledge to each question-answer pair on the downstream task. Our paper, instead, shows that is not the NLI data but the retrieved knowledge that helps.
## 3 Methodology
We focus on the following multiple choice question answering (QA) task: given a question q and a set of options A, the model should select the most likely single answer Ai ∈ A. We consider an unsupervised setting in which the model does not have access to the training or validation data. Our BUCA approach first trains the model with a knowledge graph and then uses the trained model to test on multiple QA downstream tasks. Formally, a *knowledge graph (KG)* (Pan et al., 2017) G is a tuple (*V, R, T*), where V is a set of entities, R is a set of relation types and T is a set of triples of the form (*h, r, t*) with *h, t* ∈ V the *head* and *tail* entities and r ∈ R the *relation* of the triple connecting h and t.
Our approach has three main components:
knowledge graph transfer to training data, training loss design, and downstream task testing:
Converting Triples into Binary Classification Training Data. Inspired by previous work (Su et al., 2022), each KG triple is converted into question-answer pairs by using pre-defined templates, so that the obtained pairs are then used as the input of the classification task. We use the templates provided in (Hwang et al., 2020). For example, the ATOMIC triple (PersonX thanks PersonY
afterwards, isAfter, PersonX asked PersonY for help on her homework) can be converted to "*After PersonX asked PersonY for help on her homework, PersonX thanks PersonY afterwards*". In the appendix we show the distribution of the converted sequence pairs. Along with the correct QA pairs created from the KG triples, our framework is also trained on negative QA pairs, so it can better discriminate between reasonable and unreasonable QA pairs.
More precisely, in the training dataset, each correct QA pair generated from a triple tp = (*h, r, t*) has a corresponding negative pair obtained from a variation of tp in which t is substituted by t′, which is randomly drawn from the existing tails in the KG.
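A toy sketch of this conversion with random-tail negative sampling; the single template string is illustrative and does not reproduce the full template set of Hwang et al. (2020).

```python
import random

# Illustrative template for the ATOMIC relation used in the example above.
TEMPLATES = {"isAfter": "After {tail}, {head}"}

def build_pairs(triples, all_tails, seed=0):
    """Turn KG triples into positive / negative text pairs for binary classification."""
    rng = random.Random(seed)
    examples = []
    for head, rel, tail in triples:
        pos = TEMPLATES[rel].format(head=head, tail=tail)
        neg_tail = rng.choice([t for t in all_tails if t != tail])  # random tail substitution
        neg = TEMPLATES[rel].format(head=head, tail=neg_tail)
        examples.append({"positive": pos, "negative": neg})
    return examples

triples = [("PersonX thanks PersonY afterwards", "isAfter",
            "PersonX asked PersonY for help on her homework")]
all_tails = ["PersonX asked PersonY for help on her homework", "PersonX went to the gym"]
print(build_pairs(triples, all_tails))
```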
Training Loss. For our binary classification model, we add a classification head with two nodes to the pre-trained language model. After normalizing the values on these two nodes, we can obtain reasonable and unreasonable scores for the QA
pairs. From the triple conversion step, we obtained $n$ training examples, each consisting of a question $q$, correct answer $a_c$, and incorrect answer $a_w$. For each question-answer pair, we can then obtain the reasonable and unreasonable scores $r_i^{+}$ and $r_i^{-}$ after applying a softmax layer. In each loss calculation, we jointly consider the correct and incorrect answers. For binary classification, we use two kinds of losses: *Traditional Binary Loss (TBL).*
$$\mathcal{L}=-\sum_{i=1}^{n}\big(\log(p_{a_{c}}^{+})+\log(p_{a_{w}}^{-})\big)$$
where $p_{a_{c}}^{+}$ and $p_{a_{w}}^{-}$ are the probabilities of correct and incorrect answers, respectively corresponding to reasonable and unreasonable scores.
Margin Ranking Loss.
$$\begin{aligned}\mathcal{L}=\sum_{i=1}^{n}&\max(0,\eta-\log(p_{a_{c}}^{+})+\log(p_{a_{w}}^{+}))\\ &+\max(0,\eta-\log(p_{a_{w}}^{-})+\log(p_{a_{c}}^{-}))\end{aligned}$$
where η is a margin threshold hyper-parameter.
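A hedged PyTorch sketch of the two losses, where `logits_correct` and `logits_wrong` are the two-way classification logits for the correct and incorrect QA pairs of a batch; the variable names and the margin value are ours, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def traditional_binary_loss(logits_correct, logits_wrong):
    # log-prob of "reasonable" (index 1) for correct pairs, of "unreasonable" (index 0) for wrong pairs
    log_p_correct = F.log_softmax(logits_correct, dim=-1)[:, 1]
    log_p_wrong = F.log_softmax(logits_wrong, dim=-1)[:, 0]
    return -(log_p_correct + log_p_wrong).sum()

def margin_ranking_loss(logits_correct, logits_wrong, eta=0.1):  # eta is a placeholder margin
    lp_c = F.log_softmax(logits_correct, dim=-1)  # columns: [unreasonable, reasonable]
    lp_w = F.log_softmax(logits_wrong, dim=-1)
    term_pos = torch.clamp(eta - lp_c[:, 1] + lp_w[:, 1], min=0)  # correct should score higher as "reasonable"
    term_neg = torch.clamp(eta - lp_w[:, 0] + lp_c[:, 0], min=0)  # wrong should score higher as "unreasonable"
    return (term_pos + term_neg).sum()
```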
| Methods | Backbone | Knowledge Source | COPA dev | COPA test | OpenbookQA dev | OpenbookQA test | SIQA dev | CSQA dev | SCT dev |
|---------|----------|------------------|----------|-----------|----------------|-----------------|----------|----------|---------|
| Random | - | - | 50.0 | 50.0 | 25.0 | 25.0 | 33.3 | 25.0 | 50.0 |
| RoBERTa-L | RoBERTa-L | - | 54.8 | 58.4 | 31.2 | 31.6 | 39.7 | 31.2 | 65.0 |
| GPT2-L | GPT2-L | - | 62.4 | 63.6 | 31.2 | 29.4 | 42.8 | 40.4 | 66.7 |
| Self-talk | GPT2 | GPT2 | 66.0 | - | 28.4 | 30.8 | 46.2 | 32.4 | - |
| Dou | ALBERT | ALBERT | - | - | 41.6 | 39.8 | 44.1 | 50.9 | - |
| Wang | GPT2 | GPT2 | 69.8 | - | - | - | 47.3 | - | 71.6 |
| SMLM | RoBERTa-L | e.g., ATOMIC | - | - | 34.6 | 33.8 | 48.5 | 38.8 | - |
| MICO | RoBERTa-L | Concept | 73.2 | 75.2 | - | - | 44.6 | 51.0 | - |
| MICO | RoBERTa-L | ATOMIC | 79.4 | 77.4 | - | - | 56.0 | 44.2 | - |
| NLI-KB | RoBERTa-L | Concept | 65.0 | 62.2 | 35.0 | 35.6 | 46.9 | 49.0 | 71.2 |
| NLI-KB | RoBERTa-L | ATOMIC | 65.2 | 61.6 | 39.0 | 37.2 | 46.7 | 52.1 | 72.1 |
| Ma | RoBERTa-L | CSKG | - | - | - | - | 63.2 | 67.4 | - |
| BUCA | RoBERTa-L/TBL | Concept | 84.4 | **90.6** | 43.0 | 47.2 | 53.5 | 63.5 | 87.3 |
| BUCA | RoBERTa-L/MRL | Concept | **86.2** | 89.6 | 45.2 | **47.6** | 52.6 | **65.4** | 88.0 |
| BUCA | RoBERTa-L/TBL | ATOMIC | 85.0 | 86.0 | **45.8** | 44.2 | 60.2 | 58.7 | **88.4** |
| BUCA | RoBERTa-L/MRL | ATOMIC | 84.6 | 87.8 | 43.2 | 46.0 | **61.4** | 60.3 | 85.5 |
In order to pull the representational distance between reasonable question-answer pairs as close as possible and to push the representational distance between reasonable and unreasonable ones as far as possible, we use supervised contrastive learning (Gunel et al., 2021) along with the binary classification. This is done by treating, for a given example, all other examples within the same category as its positive examples.
*Contrastive Loss of the i-th QA pair.*
$$\mathcal{L}_{scl}=\sum_{j=1}^{N}1_{y_{i}=y_{j}}\log{\frac{e^{\mathrm{sim}(h_{j},h_{i})/\tau}}{\sum_{k=1}^{N}1_{i\neq k}\,e^{\mathrm{sim}(h_{k},h_{i})/\tau}}}$$
where τ is the temperature parameter and h denotes the feature vector.
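As an illustration, a batched supervised contrastive term over feature vectors `h` with labels `y` could be written as below; this sketch follows the conventional formulation (leading negative sign, per-anchor averaging, self-similarity excluded), which differs slightly from the expression above, and the temperature value is a placeholder.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(h, y, tau=0.3):
    """Pulls same-label QA-pair representations together, pushes different labels apart.

    h: [N, d] feature vectors, y: [N] binary labels (reasonable / unreasonable).
    """
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / tau                                   # pairwise similarities scaled by temperature
    self_mask = torch.eye(len(y), dtype=torch.bool, device=h.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # exclude k == i from the denominator
    log_prob = F.log_softmax(sim, dim=-1)                   # log( e^{sim_ij} / sum_{k != i} e^{sim_ik} )
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    per_anchor = (log_prob * pos_mask).sum(dim=-1) / pos_mask.sum(dim=-1).clamp(min=1)
    return -per_anchor.mean()
```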
Inference. In the prediction phase for each candidate answer, we calculate its reasonableness score.
We choose the answer with the highest reasonableness score as the predicted answer.
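A minimal sketch of this inference step for a Hugging Face-style sequence classification model; the question/answer formatting is simplified and not necessarily identical to the released implementation.

```python
import torch

@torch.no_grad()
def predict(model, tokenizer, question, candidates):
    """Score every candidate answer and return the one with the highest reasonableness."""
    inputs = tokenizer([question] * len(candidates), candidates,
                       padding=True, truncation=True, return_tensors="pt")
    probs = model(**inputs).logits.softmax(dim=-1)   # column 1 = reasonableness score
    return candidates[probs[:, 1].argmax().item()]
```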
## 4 Experiments
In this section, we first describe our experiments on five commonsense question answering datasets, followed by ablation studies and data analysis.
## 4.1 Datasets And Baselines
We use two well-known commonsense KGs for training our framework: ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2018). For evaluation, we use five commonsense QA
datasets: COPA (Gordon et al., 2012), OpenBookQA (Mihaylov et al., 2018), SIQA (Sap et al., 2019), CSQA (Talmor et al., 2019), and SCT (Mostafazadeh et al., 2017), covering a wide range of topics within commonsense reasoning.
We compare our approach with various baselines:
RoBERTa-Large (Liu et al., 2019b), GPT2 (Radford et al., 2019), Self-talk (Shwartz et al., 2020),
Dou (Dou and Peng, 2022), Wang (Wang and Zhao, 2022) and other unsupervised systems using KGs:
SMLM (Banerjee and Baral, 2020), MICO (Su et al., 2022), NLI-KB (Huang et al., 2021) and Ma
(Ma et al., 2021). Most reported results are collected from the literature. For NLI-KB, we used their publicly available code to get the results.
Details of the KGs and datasets, as well as implementation details, can be found in the appendix.
## 4.2 Main Results
Table 1 shows the results for the five benchmarks.
Overall, BUCA achieves the best performance on all datasets. More precisely, our results respectively outperform baselines on the validation and test sets as follows: MICO by 6.8% and 13.2% on COPA; Dou by 4.2% and 7.8% on OpenbookQA.
We also outperform MICO by 5.4% on SIQA; NLI-KB by 13.3% on CSQA, and NLI-KB by 16.3%
on SCT. Ma does not provide results for COPA, OpenBookQA and SCT, but it achieves state-of-the-art results on CSQA (67.4) and SIQA (63.2), while BUCA's best results are 65.4 and 61.4, respectively. However, Ma uses multiple KGs to train a single model (ConceptNet, WordNet, and Wikidata for CSQA; ATOMIC, ConceptNet, WordNet, and Wikidata for SIQA), with a total of 662,909 and 1,197,742 training pairs, while BUCA only uses 65,536 and 61,530, cf. Table 2. Considering the difference in training data used and the closeness of the results, BUCA's approach clearly demonstrates its effectiveness. We can also observe the same trend as in MICO: ConceptNet is more helpful for CSQA and ATOMIC is more helpful for SIQA. This is explained by the fact that SIQA is built based on ATOMIC and CSQA is built based on ConceptNet. On the other datasets our framework shows similar behavior with both KGs. As for the loss functions, the margin ranking loss is on average 0.8% higher than the binary loss on ConceptNet, and 0.1% higher on ATOMIC. This is explained by the fact that the ranking loss separates the scores between reasonable and unreasonable answers more. In light of this, we will only consider the margin ranking loss in the analysis below.
| Methods | Dataset | Train Pair | Valid Pair |
|-----------|------------|--------------|--------------|
| Ma | ConceptNet | 363,646 | 19,139 |
| Ma | ATOMIC | 534,834 | 60,289 |
| Ma | WikiData | 42,342 | 2,229 |
| Ma | WordNet | 256,922 | 13,523 |
| MICO | WordNet | 256,922 | 13,523 |
| MICO | ATOMIC | 1,221,072 | 48,710 |
| BUCA | ConceptNet | 65,536 | 7,836 |
| BUCA | ATOMIC | 61,053 | 2,435 |
| Backbone | CKG | COPA dev | COPA test | OpenbookQA dev | OpenbookQA test | SIQA dev | CSQA dev | SCT dev |
|---------------|---------|----------|-----------|----------------|-----------------|----------|----------|---------|
| BERT-base | Concept | 63.0 | 67.6 | 29.6 | 32.8 | 40.5 | 49.6 | 64.9 |
| BERT-base | ATOMIC | 64.8 | 73.2 | 31.2 | 34.0 | 45.0 | 45.3 | 68.7 |
| RoBERTa-base | Concept | 70.0 | 72.8 | 30.0 | 32.8 | 46.6 | 49.0 | 65.6 |
| RoBERTa-base | ATOMIC | 70.4 | 77.4 | 33.4 | 34.2 | 50.6 | 46.9 | 70.6 |
| RoBERTa-large | Concept | 86.2 | 89.6 | 45.2 | 47.6 | 52.6 | 65.4 | 88.0 |
| RoBERTa-large | ATOMIC | 84.6 | 87.8 | 43.2 | 46.0 | 61.4 | 60.3 | 85.5 |

Table 3: Backbone model study
| Backbone | CKG | COPA (dev) | COPA (test) | OpenbookQA (dev) | OpenbookQA (test) | SIQA (dev) | CSQA (dev) | SCT (dev) |
|---|---|---|---|---|---|---|---|---|
| RoBERTa-large | Concept | 86.2 | 89.6 | 45.2 | 47.6 | 52.6 | 65.4 | 88.0 |
| w/o contrastive | Concept | 83.3 | 89.0 | 42.6 | 46.8 | 51.9 | 64.5 | 87.0 |
| RoBERTa-large | ATOMIC | 84.6 | 87.8 | 43.2 | 46.0 | 61.4 | 60.3 | 85.5 |
| w/o contrastive | ATOMIC | 84.2 | 86.6 | 42.0 | 44.0 | 60.6 | 59.8 | 84.1 |

Table 4: The influence of contrastive learning

OpenBookQA and SCT, but it achieves state-of-the-art results on CSQA (67.4) and SIQA (63.2), while BUCA's best results are 65.4 and 61.4, respectively. However, Ma uses multiple KGs to train a single model (ConceptNet, WordNet, and Wikidata for CSQA; ATOMIC, ConceptNet, WordNet, and Wikidata for SIQA), with 662,909 and 1,197,742 training pairs in total, while BUCA only uses 65,536 and 61,530, cf. Table 2. Considering the difference in the training data used and the closeness of the results, BUCA clearly demonstrates its effectiveness. We can also observe the same trend as in MICO: ConceptNet is more helpful for CSQA
and ATOMIC is more helpful for SIQA. This is explained by the fact that SIQA is built based on ATOMIC and CSQA is built based on ConceptNet.
On the other datasets, our framework shows similar behavior with both KGs. As for the loss functions, the margin ranking loss is on average 0.8% higher than the binary loss on ConceptNet, and 0.1% higher on ATOMIC. These results are explained by the fact that the ranking loss further separates the scores of reasonable and unreasonable answers. In light of this, we only consider the margin ranking loss in the analysis below.
## 4.3 Ablation Studies
In this section, we analyze the effect of the backbone models and of contrastive learning, and we explore the vocabulary overlap between the knowledge training sets and the downstream tasks, as well as the accuracy of our BUCA method.
Backbone Pre-trained LMs Our experiments using different backbone models show that in general the stronger the PLM the better the performance on the downstream task. Regarding the KGs, in the BERT-base and RoBERTa-base variants, the ATOMIC-trained models perform better than the ConceptNet-trained models, while in the RoBERTa-large one they perform similarly. This might be explained by the fact that as the model capacity increases it has more inherently available event-like commonsense knowledge, necessary in the ATOMIC-based datasets. Detailed results are shown in Table 3.
Effects of Contrastive Learning Our experiments show that the RoBERTa-large variant with contrastive learning outperforms the version without it on all datasets, regardless of the used KG.
Detailed results are shown in Table 4.
## Accuracy of the Binary Classifier

Inspired by Ghosal et al. (2022), we evaluate how often input sequences corresponding to correct and incorrect answers are accurately predicted. To this end, we use the RoBERTa-large variant trained on ATOMIC. Table 5 shows that our model tends to predict all answers as reasonable: since the negative examples in our training set are randomly selected, many of those QA pairs are semantically irrelevant or even ungrammatical, whereas many of the manually crafted candidate answers in the evaluation datasets are semantically relevant and grammatical, so our model predicts them as reasonable. We also see that the accuracy metrics for SCT and COPA are the highest. Our findings are consistent with Ghosal et al. (2022).
## 4.4 Data Analysis
To better understand why transfer learning from CKGs is more suitable than from other datasets
| Dataset | Prediction all Neg | Prediction all Pos | Incor. as Neg | Cor. as Pos | Accurate |
|---|---|---|---|---|---|
| COPA (dev) | 0.2 | 88.0 | 11.2 | 99.0 | 11.0 |
| COPA (test) | 0.4 | 88.4 | 11.2 | 99.2 | 10.8 |
| OpenbookQA (dev) | 1.4 | 67.8 | 4.8 | 93.2 | 3.4 |
| OpenbookQA (test) | 1.8 | 73.8 | 2.8 | 93.0 | 1.0 |
| SIQA (dev) | 6.3 | 50.2 | 15.7 | 86.7 | 9.4 |
| CSQA (dev) | 1.2 | 35.1 | 6.5 | 94.2 | 5.2 |
| SCT (dev) | 0.3 | 87.8 | 11.8 | 99.4 | 11.6 |

Table 5: Accuracy of the binary classifier
(i.e., MNLI or QNLI) in the commonsense QA task, we performed an analysis of the training data in NLI-KB (Huang et al., 2021) and the CKGs we use.
Following Mishra et al. (2021), we first compare the vocabulary overlap of ConceptNet, ATOMIC and MNLI (training data) with our evaluation QA datasets, using the definition of overlap introduced in that work. Table 6 shows that MNLI has higher vocabulary overlap with all the evaluation datasets than both CKGs. However, the results for NLI-KB in Table 1 show that vocabulary overlap is not a key factor for performance, as otherwise NLI-KB fine-tuned with the NLI datasets (before injecting knowledge) should perform better than the other models on the downstream tasks due to the high lexical similarity.
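As an illustration, a rough sketch of how such a vocabulary-overlap percentage can be computed is given below; the exact definition follows Mishra et al. (2021), and the whitespace tokenization and lower-casing here are simplifying assumptions.

```python
def vocabulary_overlap(source_texts, eval_texts):
    """Fraction (%) of the evaluation vocabulary that also appears in the source vocabulary."""
    source_vocab = {tok.lower() for text in source_texts for tok in text.split()}
    eval_vocab = {tok.lower() for text in eval_texts for tok in text.split()}
    return 100.0 * len(eval_vocab & source_vocab) / max(len(eval_vocab), 1)

# e.g. vocabulary_overlap(mnli_sentences, copa_dev_sentences) yields a percentage as in Table 6
```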
| Dataset | Concept | ATOMIC | MNLI |
|---|---|---|---|
| COPA (dev) | 50.4 | 70.0 | 98.0 |
| COPA (test) | 52.1 | 71.9 | 86.4 |
| OpenbookQA (dev) | 48.4 | 54.8 | 92.1 |
| OpenbookQA (test) | 48.8 | 55.2 | 93.1 |
| SIQA (dev) | 37.3 | 54.6 | 94.5 |
| CSQA (dev) | 59.1 | 63.2 | 85.0 |
| SCT (dev) | 41.2 | 57.5 | 94.5 |

Table 6: Vocabulary Overlap
| SIQA Example | Question: After a long grueling semester, Tracy took the final exam and finished their course today. Now they would graduate. Why did Tracy do this? Answer: complete their degree on time |
|---|---|
| MNLI | Because I had a deadline. This entails I had to finish by that time. |
| ATOMIC | Tracy wants finish before time expires. because Tracy takes the exam. |
| ConceptNet | pass class causes graduation. |

Table 7: Alternative answers retrieved from MNLI, ATOMIC and ConceptNet for a SIQA question.
![4_image_1.png](4_image_1.png)
We also analyze distances between sentence embeddings. Our results show that the MNLI entries perform poorly in commonsense knowledge retrieval for SIQA queries, as they are not reasonable answers. In contrast, the sentences generated from ATOMIC and ConceptNet successfully pair the SIQA questions with reasonable answers. This reveals that, although MNLI has higher lexical coverage, it does not have suitable examples to match SIQA questions. Thus, models fine-tuned with the NLI dataset hardly get any benefit for downstream commonsense reasoning tasks. Tables 7 and 8 present a random sample showing this, where reasonable alternatives are in bold.
![4_image_0.png](4_image_0.png)
## 5 Conclusion
We presented a framework converting KGs into positive/negative question-answer pairs to train a binary classification model, discriminating whether a sentence is reasonable. Extensive experiments show the effectiveness of our approach, while using a reasonably small amount of data. For future work, we will explore how to better select negative cases.
## Limitations
The method to select negative examples could be improved, as randomly selecting negative examples for training might lead the model to identify most examples in the evaluation datasets as reasonable.
Secondly, we did not explore using other numbers of candidates in the training set; we always use 2 candidate answers for each question.
## Acknowledgments
This work is supported by the Chang Jiang Scholars Program (J2019032).
## References
Pratyay Banerjee and Chitta Baral. 2020. Selfsupervised knowledge triplet learning for zero-shot question answering. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 151–162, Online. Association for Computational Linguistics.
Prajjwal Bhargava and Vincent Ng. 2022. Commonsense knowledge reasoning and generation with pretrained language models: A survey. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11):12317–12325.
Yonatan Bisk, Rowan Zellers, Ronan Le bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. *Proceedings of the AAAI Conference on Artificial Intelligence*,
34(05):7432–7439.
Jiaoyan Chen, Yuxia Geng, Zhuo Chen, Ian Horrocks, Jeff Z. Pan, and Huajun Chen. 2021. Knowledgeaware Zero-Shot Learning: Survey and Perspective.
In *Proceedings of IJCAI*, pages 4366–4373.
Zi-Yi Dou and Nanyun Peng. 2022. Zero-shot commonsense question answering with cloze translation and consistency optimization. In *The Thirty-Sixth AAAI*
Conference on Artificial Intelligence (AAAI).
Y Geng, J Chen, X Zhuang, Z Chen, J Z Pan, J Li, and H Chen Z Yuan. 2023. Benchmarking knowledge-driven zero-shot learning. *Journal of Web Semantics*.
Deepanway Ghosal, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. 2022. Two is better than many?
binary classification as an effective approach to multichoice question answering.
Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In **SEM 2012: The First Joint* Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of
the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics.
Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In *International Conference on Learning Representations*.
Canming Huang, Weinan He, and Yongmei Liu. 2021.
Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4875–4885, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2020. COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs.
Hector J. Levesque. 2011. The winograd schema challenge. In *AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning*. AAAI.
Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel.
2016. Commonsense Knowledge Base Completion.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445–1455, Berlin, Germany.
Association for Computational Linguistics.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert:
Enabling language representation with knowledge graph.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach.
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(15):13480–13488.
Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari. 2021.
Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(15):13507–13515.
Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2020. Commonsense knowledge base completion with structural and semantic context. In *Proceedings of AAAI*.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering. ArXiv:1809.02789 [cs].
Anshuman Mishra, Dhruvesh Patel, Aparna Vijayakumar, Xiang Lorraine Li, Pavan Kapanipathi, and Kartik Talamadupula. 2021. Looking Beyond SentenceLevel Natural Language Inference for Question Answering and Text Summarization. In *Proceedings* of ACL, pages 1322–1336, Online. Association for Computational Linguistics.
Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. LSDSem 2017 Shared Task: The Story Cloze Test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46–51, Valencia, Spain. Association for Computational Linguistics.
Yilin Niu, Fei Huang, Jiaming Liang, Wenkai Chen, Xiaoyan Zhu, and Minlie Huang. 2021. A semanticbased method for unsupervised commonsense question answering. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3037–3049, Online. Association for Computational Linguistics.
J. Z. Pan, G. Vetere, J.M. Gomez-Perez, and H. Wu, editors. 2017. Exploiting Linked Data and Knowledge Graphs for Large Organisations. Springer.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A.
Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reactions.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 463–473, Melbourne, Australia.
Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020. Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation. ArXiv:2004.09813 [cs].
Julien Romero, Simon Razniewski, Koninika Pal, Jeff Z.
Pan, Archit Sakhadeo, and Gerhard Weikum. 2019.
Commonsense Properties from Query Logs and Question Answering Forums. In Proc. of 28th ACM International Conference on Information and Knowledge Management (CIKM 2019), pages 1411–1420.
Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2018.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–
4473, Hong Kong, China. Association for Computational Linguistics.
Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4615–4629, Online. Association for Computational Linguistics.
Robert Speer, Joshua Chin, and Catherine Havasi. 2016.
Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI Conference on Artificial Intelligence.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 4444–4451. AAAI Press.
Ying Su, Zihao Wang, Tianqing Fang, Hongming Zhang, Yangqiu Song, and Tong Zhang. 2022. Mico: A
multi-alternative contrastive learning framework for commonsense knowledge representation.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.
Jiawei Wang and Hai Zhao. 2022. ArT: All-round thinker for unsupervised commonsense question answering. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1490–1501, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D Manning, Percy Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. In *Advances* in Neural Information Processing Systems (NeurIPS).
## Appendix A KGs, Datasets, and Implementation
This section contains more experimental details.
In particular, we give details of the used KGs and datasets. We also discuss implementation details.
## Conceptnet
ConceptNet (Speer et al., 2017) is a traditional KG
that focuses on taxonomic, lexical and physical relations (e.g., IsA, RelatedTo, *PartOf*). In our experiment, we employed the CN-82K version which is uniformly sampled from a larger set of extracted ConceptNet entity-relations (Li et al., 2016).
## Atomic
The ATOMIC KG (Sap et al., 2018) focuses on social-interaction knowledge about everyday events, and thus has a higher coverage in the field of commonsense query answering. It consists of 880K
knowledge triples across 9 relations (e.g. xNeed, oEffect, xReact). This includes mentions of topics such as causes and effects, personal feelings toward actions or events, and conditional statements.
The ATOMIC dataset is collected and validated completely through crowdsourcing.
As seen in Table 2, in comparison to the related works Ma (Ma et al., 2021) and MICO (Su et al., 2022), our method uses much less data from the CKGs (roughly **5–8×** less than Ma and **2–20×** less than MICO) while still maintaining competitive performance on the evaluation datasets.
## A.1 Generation Of Qa Pairs
The QA pairs were generated using the templates in the ATOMIC paper (Hwang et al., 2020), which are compatible with relations in both ConceptNet and ATOMIC. These templates convert KG triples into natural sentences; examples are shown in Table 9. The head entity and the mapped relation phrase are joined to form a question. The correct tail entity and a randomly sampled tail from the dataset are used as the positive and negative answers, respectively, for contrastive learning.
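A simplified sketch of this triple-to-QA conversion is shown below; the template strings and the negative-sampling scheme are illustrative assumptions rather than the exact templates of Hwang et al. (2020).

```python
import random

# Illustrative templates mapping a KG relation to a question prefix (assumed, cf. Table 9)
TEMPLATES = {
    "AtLocation": "{head} located or found at",
    "xWant": "{head}, as a result, PersonX wants",
    "oEffect": "{head}, as a result, others will",
}

def make_qa_pair(triple, all_tails):
    """Turn one KG triple into a question with a positive and a randomly sampled negative answer."""
    head, relation, tail = triple
    question = TEMPLATES[relation].format(head=head)
    negative = random.choice([t for t in all_tails if t != tail])
    return {"question": question, "positive": tail, "negative": negative}

example = make_qa_pair(("chopstick", "AtLocation", "table"),
                       ["table", "flour", "go somewhere else"])
```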
## A.2 Evaluation Datasets
We evaluate our framework on five downstream QA tasks: COPA, OpenBookQA, SIQA, CSQA, and SCT, which cover a wide range of topics within commonsense reasoning. Accuracy is used as the evaluation metric. All experiments are performed in an unsupervised setting, where our models are not trained on the evaluation tasks.
Choice of Plausible Alternatives (COPA) (Gordon et al., 2012) is a two-choice question-answer dataset designed to evaluate performance in open-domain commonsense causal reasoning. Each entry contains a premise and two possible answers, and the task is to select the answer that most likely has a causal relationship with the premise. The dataset consists of 500 questions each for the development and test sets.
OpenBookQA (Mihaylov et al., 2018) is inspired by open book exams that assess human understanding in real life. This QA task requires a deeper understanding of both open book facts (e.g., *metals are heat conductors*) and broad common knowledge (e.g., a steel spoon is made of metal) to answer questions like: *Which of these objects conducts the most heat: A metal spoon, pair of jeans, or cotton made clothing?* It contains 500 multiple-choice science questions each for the development and test sets.
SocialIQA (SIQA) (Sap et al., 2019) contains multiple-choice questions about emotional and social interactions in a variety of everyday situations. Each entry comes with a context, a question, and 3 candidate answers. The questions are generated from the ATOMIC KG by converting triples into question sentences using predefined templates, and the answers are crowdsourced. The dataset's development split is used as the evaluation set, containing 1,954 questions.
CommonsenseQA (CSQA) (Talmor et al., 2019)
contains questions focused on various commonsense aspects. Each entry contains a question and five candidate answers. The questions are constructed by crowd workers. The answer candidates include distractors that are either hand-picked or taken from ConceptNet nodes. The development set is used as the evaluation set, containing 1,221 questions.
Story Cloze Test (SCT) (Mostafazadeh et al., 2017) is the LSDSem 2017 shared task on evaluating story understanding and script learning. Each entry contains a four-sentence story and two possible fifth sentences, and the model has to pick the most suitable ending for the story. The development set is used as the evaluation set, containing 1,572 stories.
| Triple | Source | Negative Triple | Generated QA Pairs |
|---|---|---|---|
| (chopstick, AtLocation, table) | ConceptNet | (bread, is created by, flour) | Q: Chopstick located or found at; A: table; B: flour |
| (PersonX wants to go to the office, oEffect, get dressed up) | ATOMIC | (PersonX leaves the room, xWant, to go somewhere else) | Q: PersonX wants to go to the office, as a result, PersonX will; A: get dressed up; B: to go somewhere else |

Table 9: QA pairs generated from KG triples
## A.3 Implementation Details
Our experiments are run on a single A100 GPU
card. We use RoBERTa-Large as our backbone model. The training batch size is 196, and the maximum sequence length for training is 64. The learning rate is set to 5e-5 for all experiments. For experiments with the margin ranking loss, η is set to 1. The validation set is evaluated with accuracy and used to select the best model for further evaluation. The models are trained for 20 epochs, with early stopping when the change in validation loss is within 1%.
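For reference, a minimal PyTorch sketch of the margin ranking objective with the margin η = 1 reported above is given below; the scoring model and batching are assumptions.

```python
import torch
import torch.nn as nn

# Margin ranking loss with eta = 1, as in our hyperparameter setting
ranking_loss = nn.MarginRankingLoss(margin=1.0)

def mrl_step(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # Encourages score(reasonable QA pair) > score(unreasonable QA pair) + margin
    target = torch.ones_like(pos_scores)
    return ranking_loss(pos_scores, neg_scores, target)
```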
## B Ablation Studies
We present the full results for the ablation studies discussed in Section 4.3: Table 3 for the backbone model study, Table 4 for the influence of contrastive learning, and Table 5 for the accuracy of the binary classifier.
## C Data Analysis
In the analysis of distances between sentence embeddings, we treat each entry in the CKG datasets as a possible answer and encode it using the SBERT pre-trained model (*all-mpnet-base-v2*) (Reimers and Gurevych, 2019, 2020). Then, the cosine similarity between the SIQA question and the encoded sentences is calculated to rank their semantic relatedness.
We retrieve the top 3 answers for each source and list them by similarity score in descending order. Table 10 extends the results presented in Section 4.4; Table 11 shows the alternative answers from the CKG datasets for a COPA question.
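A minimal sketch of this retrieval analysis, assuming the sentence-transformers package, is given below; the example strings and variable names are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")
question = "After a long grueling semester, Tracy took the final exam ..."
candidates = ["pass class causes graduation.",
              "Because I had a deadline. This entails I had to finish by that time."]

q_emb = model.encode(question, convert_to_tensor=True)
c_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(q_emb, c_emb)[0]          # cosine similarity to each candidate
top3 = scores.argsort(descending=True)[:3]      # rank candidates by semantic relatedness
```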
| SIQA Example | Question: After a long grueling semester, Tracy took the final exam and finished their course today. Now they would graduate. Why did Tracy do this? Answer: complete their degree on time |
|---|---|
| MNLI | Because I had a deadline. This entails I had to finish by that time. / The professors went home feeling that history had been made. This entails The professors returned home. / They got married after his first year of law school. This entails Their marriage took place after he finished his first year of law school. |
| ATOMIC | Tracy wants finish before time expires. because Tracy takes the exam / Tracy wanted to get a degree. as a result Tracy finishes Tracy's test / Tracy graduates with a degree. but before, Tracy needed get pass with good marks. |
| ConceptNet | pass class causes graduation / study ends with the event or action graduate / graduation because take final exam |

Table 10: Complete results of alternative answers retrieved from MNLI, ATOMIC and ConceptNet for the SIQA question. Reasonable alternatives are in bold.
| COPA Example | Question: The boy wanted to be muscular. As a result, Answer: He lifted weights. |
|---|---|
| MNLI | Emboldened, the small boy proceeded. This entails the small boy felt bolder and continued. / Out of shape, fat boy. This entails the boy was obese. / When Sport Resort won the contract for the construction of a new hotel center for 1200 people around the Olympic Sports Arena (built as a reserve for the future, to have it ready in time for the next championships), Gonzo began to push his weight around, because he felt more secure. This entails when Sport Resort won the contract for the construction of a new hotel Gonzo felt more secure. |
| ATOMIC | John wanted to build his physique. as a result the boy lifts weights / The boy starts working out. as a result, the boy wants to gain more muscle / The boy starts lifting weights. as a result, the boy will build muscle |
| ConceptNet | lift could make use of muscle / person desires strong body / build muscle because exercise |

Table 11: Alternative answers from CKGs for the COPA question.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section Appendix
✓ B1. Did you cite the creators of artifacts you used?
Section Appendix
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section Appendix
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section Appendix C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
an-rudinger-2023-nichelle | Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases | https://aclanthology.org/2023.acl-short.34 | Through the use of first name substitution experiments, prior research has demonstrated the tendency of social commonsense reasoning models to systematically exhibit social biases along the dimensions of race, ethnicity, and gender (An et al., 2023). Demographic attributes of first names, however, are strongly correlated with corpus frequency and tokenization length, which may influence model behavior independent of or in addition to demographic factors. In this paper, we conduct a new series of first name substitution experiments that measures the influence of these factors while controlling for the others. We find that demographic attributes of a name (race, ethnicity, and gender) and name tokenization length are both factors that systematically affect the behavior of social commonsense reasoning models. | # Nichelle And Nancy: The Influence Of Demographic Attributes And Tokenization Length On First Name Biases
Haozhe An University of Maryland, College Park [email protected] Rachel Rudinger
![0_image_0.png](0_image_0.png)
University of Maryland, College Park [email protected]
## Abstract
Through the use of first name substitution experiments, prior research has demonstrated the tendency of social commonsense reasoning models to systematically exhibit social biases along the dimensions of race, ethnicity, and gender (An et al., 2023). Demographic attributes of first names, however, are strongly correlated with corpus frequency and tokenization length, which may influence model behavior independent of or in addition to demographic factors.
In this paper, we conduct a new series of first name substitution experiments that measures the influence of these factors while controlling for the others. We find that demographic attributes of a name (race, ethnicity, and gender)
and name tokenization length are *both* factors that systematically affect the behavior of social commonsense reasoning models.
## 1 Introduction
Social science studies have shown that individuals may face race or gender discrimination based on demographic attributes inferred from names (Bertrand and Mullainathan, 2004; Conaway and Bethune, 2015; Stelter and Degner, 2018). Similarly, large language models exhibit disparate behaviors towards first names, both on the basis of demographic attributes (Wolfe and Caliskan, 2021) and prominent named entities (Shwartz et al., 2020).
Such model behavior may cause *representational* harms (Wang et al., 2022a) if names associated with socially disadvantaged groups are in turn associated with negative or stereotyped attributes, or allocational harms (Crawford, 2017) if models are deployed in real-world systems, like resume screeners (O'Neil, 2016; Blodgett et al., 2020).
The task of *social commonsense reasoning* (Sap et al., 2019; Forbes et al., 2020), in which models must reason about social norms and basic human psychology to answer questions about interpersonal situations, provides a particularly fruitful setting for studying the phenomenon of name biases in NLP models. Questions in the Social IQa dataset
(Sap et al., 2019), for example, describe hypothetical social situations with named, but completely generic and interchangeable, participants (e.g. "Alice and Bob"). Social IQa questions require models to make inferences about these participants, yet they maintain the convenient property that correct
(or best) answers should be invariant to name substitutions in most or all cases.
Leveraging this invariance property, prior work (An et al., 2023) has demonstrated that social commonsense reasoning models acquire unwarranted implicit associations between names and personal attributes based on demographic factors
(Fig. 1). Building upon this finding, we investigate a natural follow-up question: *why?*
We identify two possible factors that cause a model's disparate treatment towards names: demographic attributes and tokenization length. We hypothesize that names associated with different demographic attributes, in particular race, ethnicity, and gender may cause a model to represent and treat them differently. These demographic
![1_image_0.png](1_image_0.png)
attributes are also strongly correlated with corpus frequency and tokenization length (Wolfe and Caliskan, 2021). **Tokenization** (or segmentation)
breaks down an input sentence into a series of subword tokens from a predefined vocabulary, each of which is then, typically, mapped to a word embedding as the input to a contemporary language model. A name's **tokenization length** refers to the number of subwords in the name following tokenization. In this work, we refer to *singly tokenized* and *multiply tokenized* names as those consisting of one or multiple tokens after tokenization, respectively. As a result, singly tokenized names are represented with a single embedding vector, while multiply tokenized names are represented by two or more. With these potential confounds, we attempt to address the research question: *In social* commonsense reasoning, to what extent do demographic attributes of names (race, ethnicity, and gender) and name tokenization length each have an impact on a model's treatment towards names?
We first conduct an empirical analysis to understand the distribution of tokenization lengths in names given demographic attributes, and viceversa. Adopting the open-ended bias-discovery framework, SODAPOP (An et al., 2023), we then analyze the impact of demographic attributes and tokenization length on model behavior. We find that *both* factors have a significant impact, even when controlling for the other. We conclude that due to correlations between demographics and tokenization length, systems will not behave fairly unless *both* contributing factors are addressed. Finally, we show that a naïve counterfactual data augmentation approach to mitigating name biases in this task is ineffective (as measured by SODAPOP),
concluding that name biases are primarily introduced during pre-training and that more sophisticated mitigation techniques may be required.
## 2 Demographic Attributes And Tokenization Length Are Correlated
Previously, Wolfe and Caliskan (2021) have shown that White male names occur most often in pretraining corpora, and consequently, White male names are more likely to be singly tokenized. We replicate this finding by collecting 5,748 first names for 4 races/ethnicities (White, Black, Hispanic, and Asian) and 2 genders (female and male) from a U.S. voter files dataset compiled by Rosenman et al.
(2022) (specific data processing and name inclusion criteria are in appendix B.1). We compute and plot the conditional probabilities of tokenization length given demographic attributes (race/ethnicity and gender) and vice-versa in Fig. 2 using the BERT
tokenizer (Devlin et al., 2019; Wu et al., 2016). Let ST be the event that a name is singly tokenized.
We see in Fig. 2 that P(White|ST), P(ST|White),
P(Male|ST), and P(ST|Male) are substantially higher than other conditional probabilities involving ST1, confirming Wolfe and Caliskan (2021).
These observations suggest that a model tends to represent White names and male names differently from others in terms of the tokenization length.
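A minimal sketch of this tokenization-length analysis, assuming the Hugging Face transformers tokenizer for bert-base-uncased and placeholder name lists, is given below.

```python
from collections import Counter
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenization_length(name: str) -> int:
    """Number of subword tokens a name is split into."""
    return len(tokenizer.tokenize(name))

# Placeholder subgroup name lists; the real study uses names from the voter-file data.
names = {("White", "Male"): ["Adam", "Scott"], ("Black", "Female"): ["Nichelle", "Latisha"]}

counts = Counter()
for (race, gender), group in names.items():
    for name in group:
        counts[(race, gender, tokenization_length(name) == 1)] += 1

# Conditional probabilities such as P(ST | White) follow from the normalized counts.
p_st_given_white = (sum(v for (r, g, st), v in counts.items() if r == "White" and st)
                    / max(sum(v for (r, g, st), v in counts.items() if r == "White"), 1))
```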
Given these substantial differences in tokenization lengths across demographic groups, we are motivated to investigate whether tokenization is a primary *cause* of disparate treatment of names across demographic groups. It is important to note here that, even if tokenization *were* the primary cause of disparate treatment of names across demographic groups, this discovery would not in itself resolve the fairness concerns of representational and allocational harms based on race, ethnicity and gender, but it might point to possible technical solutions.
However, as we will show in the next section, disparate treatment of names across demographic attributes persists strongly even when controlling for tokenization length (and vice-versa).¹

¹We present similar results for the RoBERTa (Liu et al., 2019) and GPT-2 (Radford et al., 2019) tokenizer (Sennrich et al., 2015) in Fig. 6 (appendix A).

![2_image_0.png](2_image_0.png)
## 3 **Analyzing The Influences Via Sodapop**
We follow SODAPOP (An et al., 2023) to investigate how the two factors in § 2 influence a Social IQa model's behavior towards names.
## 3.1 Experiment Setup
SODAPOP leverages samples from Social IQa (Sap et al., 2019), a social commonsense reasoning multiple choice questions (MCQ) dataset. Each MCQ
consists of a social context c, a question q, and three answer choices τ1, τ2, τ3, one of which is the only correct answer. An example is shown in Fig. 1.
Subgroup names For controlled experiments, we obtain at most 30 names for each subgroup categorized by the intersection of race/ethnicity, gender, and tokenization length (BERT tokenizer), resulting in a total of 686 names. Table 1 (appendix)
shows the specific breakdown for each group.
Success rate vectors Using millions of MCQ
instances, SODAPOP quantifies the associations between names and words using *success rate vectors* (SR vectors): a vector whose entries are the probabilities that a distractor τi containing word w fools the model, given that name n is in the context. For illustration, out of 5,457 distractors containing the word "violent" that we generated for the name "Nichelle" (Fig. 1), 183 misled the model to pick the distractor over the correct answer choice. The success rate for the word-name pair ("violent", "Nichelle") is therefore 183/5457 = 3.28%. We present more details, including the formal mathematical definition of success rate, in appendix B.2.
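The following is a minimal sketch of the success-rate computation for a single (word, name) pair; the model_prefers_distractor callback stands in for running the finetuned MCQ model on one generated instance and is an assumption.

```python
def success_rate(word: str, name: str, instances, model_prefers_distractor) -> float:
    """instances: generated MCQs whose context mentions `name` and whose distractor contains `word`."""
    fooled = sum(1 for mcq in instances if model_prefers_distractor(mcq))
    return fooled / max(len(instances), 1)

# e.g. the ("violent", "Nichelle") entry above corresponds to 183 fooled out of 5,457 instances.
```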
Clustering of the success rate vectors The clustering of SR vectors can be visualized with tSNE projections. To quantify the tightness of clustering between two groups of SR vectors *A, B*, we first find the centroids cA and cB by averaging 3 random SR vectors within each group. Then, for each SR vector s (including the 3 random vectors used for centroid computation), we assign the label a if it is closer in Euclidean distance to cA, and b otherwise. We check the accuracy x of this naïve *membership prediction*. The membership prediction accuracy on SR vectors produced by a fair model would be close to 0.5, indicating that name attributes are not easily recoverable from their corresponding SR vectors.
We evaluate statistical significance using a variant of the permutation test. The null hypothesis is that the SR vectors of groups A and B are no more clusterable than a random re-partitioning of A ∪ B would be. We randomly permute and partition the SR vectors into A′, B′ with the same cardinalities and relabel them. We predict the membership of SR vectors based on their distance to the new centroids cA′ and cB′, obtaining accuracy x′. The p-value P(x′ > x) is estimated over 10,000 runs.
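A minimal sketch of the membership prediction and its permutation test is shown below; the array shapes and random-number handling are our assumptions.

```python
import numpy as np

def membership_accuracy(A: np.ndarray, B: np.ndarray, rng) -> float:
    """A, B: (n_a, d) and (n_b, d) success-rate vectors for two name groups."""
    cA = A[rng.choice(len(A), 3, replace=False)].mean(axis=0)  # centroid from 3 random vectors
    cB = B[rng.choice(len(B), 3, replace=False)].mean(axis=0)
    X = np.vstack([A, B])
    y = np.array([0] * len(A) + [1] * len(B))
    pred = (np.linalg.norm(X - cB, axis=1) < np.linalg.norm(X - cA, axis=1)).astype(int)
    return (pred == y).mean()

def permutation_p_value(A, B, n_runs=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = membership_accuracy(A, B, rng)
    X = np.vstack([A, B])
    hits = 0
    for _ in range(n_runs):
        perm = rng.permutation(len(X))
        A2, B2 = X[perm[:len(A)]], X[perm[len(A):]]
        hits += membership_accuracy(A2, B2, rng) > observed
    return hits / n_runs
```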
## 3.2 Results: Both Factors Matter
We use the 686 names across all subgroups, almost evenly distributed by demographic attributes, and obtain the tSNE projection of their SR vectors (obtained using BERT, and the dimension is 736) in Fig 3. We observe clear clustering by tokenization length, race/ethnicity, and gender. Since tokenization length is generally correlated with corpus frequency, we also see weak clustering of the SR vectors by frequency.
We report the membership prediction accuracy of SR vectors (obtained by running SODAPOP on a finetuned BERT model for Social IQa) for all pairs of subgroups in Fig. 4a. Each cell in the figure shows the separability of SR vectors for names from two groupings. To illustrate, the top left cell shows singly tokenized White male names are highly separable (> 80%) from singly tokenized White female names; the entire heatmap shows the
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
results for all pairs. As we vary one factor and control for the others, we find that race/ethnicity, gender, and tokenization length are each name attributes that lead to systematically different model behavior, as measured by membership prediction accuracy. Almost all prediction accuracies are close to 1.0, indicating perfect separation of the clusters, with p < 0.001 in nearly all settings. We see in Fig. 4a, for instance, that SR vectors of singly tokenized Black female names and singly tokenized White female names are perfectly classified, so race is still a pertinent factor even when controlling for gender and tokenization. In contrast, SR vectors for singly tokenized Asian male and Asian female names are not distinguishable, although gender appears to influence model behavior under most other controlled settings.
vectors of singly tokenized Black female names and singly tokenized White female names are perfectly classified, so race is still a pertinent factor even controlling for gender and tokenization. In contrast, SR vectors for singly tokenized Asian male and Asian female names are not distinguishable, although gender appears to influence model behavior under most other controlled settings.
We obtain experimental results for RoBERTa and GPT-2 in appendix C. We observe that these additional results also demonstrate a similar trend as BERT, generally supporting the hypothesis that models exhibit disparate behavior for different names based on their demographic attributes as well as tokenization length. However, the results for RoBERTa and GPT-2 are less strong than that of BERT. We speculate a variety of reasons that could give rise to the different results among these models. One potential major cause is the different tokenization algorithms used by the models: BERT uses WordPiece (Wu et al., 2016) while RoBERTa and GPT-2 use Byte-Pair Encoding (Sennrich et al.,
2015) for tokenization. Due to this difference, the tokenization length of a name can vary in these models. For example, "Nancy" is singly tokenized in BERT but is broken down into ["N", "ancy"]
in RoBERTa or GPT-2. Beyond tokenization, the different pre-training algorithms and training corpora will also likely contribute to the slightly different observations between Fig. 4 and Fig. 10.
## 4 Counter-Factual Data Augmentation
We apply counter-factual data augmentation (CDA)
to the Social IQa training set as we attempt to finetune a model that is indifferent to both tokenization length and the demographic attributes of names.
We choose to experiment with CDA because it would shed light on the source of name biases. If biases mostly arise from finetuning, we expect finetuning on Social IQa with CDA would largely address the problem; otherwise, biases mostly originate from pre-training and are not easily overridden during finetuning.
For each Social IQa sample, we identify the original names using Stanford NER (Finkel et al.,
2005). We find that more than 99% of samples contain one or two names. We create copies of the MCQ samples and replace the identified names with random names from our sampled sub-groups such that the overall name frequency is evenly distributed over tokenization lengths and demographic attributes, resulting in an augmented set whose size increases by 16×. We finetune a BERT model
However, this naïve solution is rather ineffective
(Fig. 4b). This negative result is not surprising as it aligns with the observations that SODAPOP
could detect biases even in models debiased with state-of-the-art algorithms (An et al., 2023). It also indicates that pre-training contributes to the biased model behavior. Hence, a more sophisticated solution is needed to tackle this problem.
## 5 Related Work
Social biases in language models Multiple recent works aim to detect social biases in language models (Rudinger et al., 2018; Zhao et al., 2018, 2019; Nangia et al., 2020; Li et al., 2020; Nadeem et al., 2021; Sap et al., 2020; Parrish et al., 2022).
Some works specifically diagnose biases in social commonsense reasoning (Sotnikova et al., 2021; An et al., 2023), but they do not explain what causes a model to treat different names dissimilarly; in particular, these works do not consider the influence of tokenization length on model behavior towards different names.
Name artifacts Previous research indicates that language models exhibit disparate treatments towards names, partially due to their tokenization or demographic attributes (Maudslay et al., 2019; Czarnowska et al., 2021; Wang et al., 2022b). However, thorough analyses of the factors influencing first name biases are lacking in these works. While Wolfe and Caliskan (2021) study the systematic different *internal representations* of name embeddings in language models due to the two factors, we systematically study how the two factors both connect with the disparate treatment of names by a model in a *downstream* task.
## 6 Conclusion
We have demonstrated that demographic attributes and tokenization length are *both* factors of first names that influence social commonsense model behavior. Each of the two factors has some independent influence on model behavior because when controlling one and varying the other, we observe disparate treatment of names. When controlling for tokenization length (e.g. Black male singlytokenized names vs White male singly-tokenized names) we still find disparate treatment. Conversely, when we control for demographics (e.g.
Black female singly-tokenized vs Black female triply-tokenized names), the model also treats those names differently. Because demographic attributes
(race, ethnicity, and gender) are *correlated* with tokenization length, we conclude that systems will continue to behave unfairly towards socially disadvantaged groups unless *both* contributing factors are addressed. We demonstrate the bias mitigation is challenging in this setting, with the simple method of counterfactual data augmentation unable to undo name biases acquired during pre-training.
## Limitations
Incomplete representation of all demographic groups We highlight that the names used in our study are not close to a complete representation of every demographic group in the United States or world. In our study, we adopt the definition of race/ethnicity from the US census survey, using US-centric racial and ethnic categorizations that may be less applicable in other countries. We adopt a binary model of gender (female and male),
based on the SSA dataset, which is derived from statistics on baby names and assigned sex at birth; this approach limits our ability to study chosen first names, or to study fairness with respect to nonbinary and transgender people. For race/ethnicity, our study is limited to US census categories of White, Black, Hispanic, and Asian. We are unable to include American Indian or Alaska Native in our study, for instance, as we were unable to identify any names from this group that met our inclusion criteria of > 50% membership according to our name data source.
Furthermore, by using first names as a proxy for demographic attributes, we are only able to study certain demographic attributes that plausibly correlate with names (e.g., race, ethnicity, and gender)
but not other demographic attributes that are likely harder to infer from names (e.g., ability or sexual orientation). Other demographic attributes that may be discernible to varying degrees from first names were excluded from the scope of this study (e.g.,
nationality, religion, age).
Assumption: Invariance under name substitution Invariance under name substitution, while a valuable fairness criterion for Social IQa, may not hold in all other task settings. For example, a factoid QA system should provide different answers to the questions "What year was Adam Smith born?"
(1723) and "What year was Bessie Smith born?"
(1894).
Extended evaluation time and heavy computational costs Due to the huge number of MCQ
instances we construct for evaluation and a diverse set of names to cover multiple demographic identities, it takes a considerably large amount of time and computational resources to obtain the analysis results. We detail the approximated time and computational budget in appendix B.2. However, it is worth noting that the extensive analysis on a wide range of MCQ instances and names makes our observations more statistically robust. A future research direction may be optimizing the implementation of SODAPOP framework, which we use as a major experiment setup to obtain the analysis, for more efficient evaluation.
(In)effectiveness of counter-factual data augmentation It is worth noting that the ineffective result we obtained is not surprising because SODAPOP has demonstrated that models that are trained with existing state-of-the-art debiasing algorithms continue to treat names differently (An et al., 2023). Although we find that controlling the name distribution in the finetuning dataset to be rather ineffective in mitigating the disparate treatment of names, it is an open question if applying CDA to the pre-training corpus would be more effective. A recent work proposes to apply CDA to the pre-training corpus (Qian et al., 2022), and it will likely be a great source to use for investigating our open question here.
## Ethics Statement
Potential risks Our paper contains an explicit example of demographic biases in a social commonsense reasoning model (Fig. 1). This observation does not reflect the views of the authors. The biased content is for illustration purpose only. It should not be exploited for activities that may cause physical, mental, or any form of harm to people.
The potential benefits from our work include: (1)
insights into the factors that influence a social commonsense reasoning model's behavior towards first names; (2) the potential for increased awareness of these factors to encourage more cautious deployment of real-world systems; and (3) better insights into the challenges of debiasing, and how demographic and tokenization issues will *both* need to be addressed.
Differences in self-identifications We have categorized names into subgroups of race/ethnicity and gender by consulting real-world data as we observe a strong statistical association between names and demographic attributes (race/ethnicity and gender). However, it is crucial to realize that a person with a particular name may identify themselves differently from the majority, and we should respect their individual preferences and embrace the differences. In spite of the diverse possibilities in selfidentification, our observations are still valuable because we have designed robust data inclusion criteria (detailed in appendix B.1) to ensure the statistical significance of our results.
## Acknowledgements
We thank the anonymous reviewers for their constructive feedback. We also thank Neha Srikanth, Abhilasha Sancheti, and Shramay Palta for their helpful suggestions to improve the manuscript.
## References
Haozhe An, Zongxia Li, Jieyu Zhao, and Rachel Rudinger. 2023. SODAPOP: Open-ended discovery of social biases in social commonsense reasoning models. In *Proceedings of the 17th Conference of* the European Chapter of the Association for Computational Linguistics, pages 1565–1588, Dubrovnik, Croatia. Association for Computational Linguistics.
Marianne Bertrand and Sendhil Mullainathan. 2004.
Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. *American economic review*, 94(4):991–1013.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454–
5476, Online. Association for Computational Linguistics.
Wendy Conaway and Sonja Bethune. 2015. Implicit bias and first name stereotypes: What are the implications for online instruction?. *Online Learning*,
19(3):162–178.
Kate Crawford. 2017. The trouble with bias. NeurIPS.
Paula Czarnowska, Yogarshi Vyas, and Kashif Shah.
2021. Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. *Transactions of the Association for Computational Linguistics*, 9:1249–1267.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
(ACL'05), pages 363–370, Ann Arbor, Michigan. Association for Computational Linguistics.
Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 653–670, Online. Association for Computational Linguistics.
Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Vivek Srikumar. 2020. UNQOVERing stereotyping biases via underspecified questions. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489, Online.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267–5275, Hong Kong, China. Association for Computational Linguistics.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021.
StereoSet: Measuring stereotypical bias in pretrained language models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics.
Cathy O'Neil. 2016. *Weapons of Math Destruction:*
How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, USA.
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ:
A hand-built bias benchmark for question answering.
In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
Rebecca Qian, Candace Ross, Jude Fernandes, Eric Michael Smith, Douwe Kiela, and Adina Williams. 2022. Perturbation augmentation for fairer NLP. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9496–9521, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Evan TR Rosenman, Santiago Olivella, and Kosuke Imai. 2022. Race and ethnicity data for first, middle, and last names. *arXiv preprint arXiv:2208.12443*.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers),
pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–
4473, Hong Kong, China. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2015. Neural machine translation of rare words with subword units. *arXiv preprint arXiv:1508.07909*.
Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord.
2020. "you are grounded!": Latent name artifacts in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6850–6861, Online. Association for Computational Linguistics.
Anna Sotnikova, Yang Trista Cao, Hal Daumé III, and Rachel Rudinger. 2021. Analyzing stereotypes in
generative text inference tasks. In *Findings of the* Association for Computational Linguistics: ACLIJCNLP 2021, pages 4052–4065, Online. Association for Computational Linguistics.
Marleen Stelter and Juliane Degner. 2018. Recognizing emily and latisha: Inconsistent effects of name stereotypicality on the other-race effect. Frontiers in psychology, 9:486.
Angelina Wang, Solon Barocas, Kristen Laird, and Hanna Wallach. 2022a. Measuring representational harms in image captioning. In *2022 ACM Conference on Fairness, Accountability, and Transparency*,
FAccT '22, page 324–335, New York, NY, USA. Association for Computing Machinery.
Jun Wang, Benjamin Rubinstein, and Trevor Cohn.
2022b. Measuring and mitigating name biases in neural machine translation. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2576–2590, Dublin, Ireland. Association for Computational Linguistics.
Robert Wolfe and Aylin Caliskan. 2021. Low frequency names exhibit bias and overfitting in contextualizing language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 518–532, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system:
Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies
and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27.
## A Additional Analysis On Frequency, Tokenization, And Demographic Attributes Of Names
We provide the complementary plots for Fig. 2 by showing the raw counts of the names in Fig. 5.
We also present preliminary observations in this section on the connection between frequency, tokenization, and demographic attributes of names for the RoBERTa and GPT-2 tokenizers. These results (Fig. 6) are similar to those in § 2. White male names are more likely to be singly tokenized by RoBERTa or GPT-2 as well. We observe that the conditional probability that a name is singly tokenized given that it is Asian is also quite high. We speculate that the reason is that Asian first names have fewer characters on average (4.40) than Black names (6.48) and Hispanic names (6.41), which makes Asian names more likely to be singly tokenized as well.
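Tokenization lengths of this kind can be computed directly with the HuggingFace tokenizers. The sketch below is illustrative only; the model identifiers and example names are assumptions rather than the exact setup used here.

```python
from transformers import AutoTokenizer

# Minimal sketch: count how many subword tokens each tokenizer assigns to a first name.
tokenizers = {
    "bert": AutoTokenizer.from_pretrained("bert-base-cased"),
    "roberta": AutoTokenizer.from_pretrained("roberta-base"),
    "gpt2": AutoTokenizer.from_pretrained("gpt2"),
}

def tokenization_length(name: str, model_key: str) -> int:
    tokenizer = tokenizers[model_key]
    # RoBERTa and GPT-2 use byte-level BPE that is whitespace-sensitive, so prepend a
    # space to tokenize the name as it would appear in the middle of a sentence.
    text = name if model_key == "bert" else " " + name
    return len(tokenizer.tokenize(text))

for name in ["Emily", "Latisha", "Keisha"]:  # example names only, not the full name list
    print(name, {m: tokenization_length(name, m) for m in tokenizers})
```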
In addition, we count the occurrences of 608 names (a subset of the 5,748 names in § 2) in Wikipedia2 and BooksCorpus (Zhu et al., 2015),
which are used to pre-train BERT and RoBERTa.
Fig. 7 illustrates the distribution of name frequency over different tokenization lengths. We see that, regardless of the model, most singly tokenized names have higher average frequency, whereas multiply tokenized names share similar distributions with lower frequency overall.
## B Detailed Experiment Setup

## B.1 Experiments For Preliminary Observations
Names We collect people's first names from a U.S. voter files dataset compiled by Rosenman et al.
(2022). We filter out names whose frequency in the dataset is less than 200. Since each name is not strictly associated with a single race/ethnicity, but rather reflects a distribution over races/ethnicities, we analyze only names for which the percentage of people with that name identifying as that race/ethnicity is above 50%. We assign a binary gender label to each name by cross-referencing gender statistics in the SSA dataset.3 If the name is absent from the SSA dataset, we omit that name.
2https://huggingface.co/datasets/wikipedia 3https://www.ssa.gov/oact/babynames/
With these constraints, there is only one name for the category "Other race/ethnicity". For robust statistical analysis, we exclude this category and keep only the other four categories in the data source: White, Black, Hispanic, and Asian. In total, there are 5,748 names.
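A minimal sketch of this filtering pipeline is given below. The file and column names are hypothetical placeholders, since the exact layouts of the voter-file and SSA data are not specified here.

```python
import pandas as pd

# Hypothetical inputs: per-name counts and race/ethnicity shares from the voter file,
# and male/female counts from the SSA baby-name statistics.
names = pd.read_csv("voter_file_first_names.csv")  # columns: name, count, pct_white, ...
ssa = pd.read_csv("ssa_gender_counts.csv")         # columns: name, male_count, female_count

race_cols = ["pct_white", "pct_black", "pct_hispanic", "pct_asian", "pct_other"]

# 1) Keep names occurring at least 200 times.
names = names[names["count"] >= 200]

# 2) Keep names whose majority race/ethnicity share exceeds 50%, excluding "Other".
names["majority_share"] = names[race_cols].max(axis=1)
names["majority_group"] = names[race_cols].idxmax(axis=1)
names = names[(names["majority_share"] > 0.5) & (names["majority_group"] != "pct_other")]

# 3) Assign a binary gender label from SSA counts; names absent from SSA are dropped
#    by the inner join.
merged = names.merge(ssa, on="name", how="inner")
merged["gender"] = (merged["male_count"] >= merged["female_count"]).map({True: "M", False: "F"})
```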
Models We use three popular language models for the analysis, namely BERT (Devlin et al., 2019),
RoBERTa (Liu et al., 2019), and GPT-2 (Radford et al., 2019). BERT uses WordPiece (Wu et al., 2016) for tokenization, while both RoBERTa and GPT-2 use Byte-Pair Encoding (Sennrich et al.,
2015) as their tokenization algorithm. BERT-base has 110 million parameters, RoBERTa-base has 123 million parameters, and GPT-2 has 1.5 billion parameters. No finetuning is needed for the experiments in § 2 because the tokenization of the input is invariant to further finetuning on a downstream task.
## B.2 Experiments With Sodapop
Social IQa To examine machine intelligence in everyday situations, Sap et al. (2019) publish Social IQa, a social commonsense reasoning multiple-choice dataset. Each MCQ consists of a social context, a question, and three answer choices, one of which is the only correct answer. An example from Social IQa is *Context:* "Kai made a wish and truly believed that it would come true." Q: "How would you describe Kai?" A1: "a cynical person" A2: "like a wishful person" A3: "a believing person" (correct choice). There are 33,410 samples in the training set and 1,954 instances in the development set.

Table 1: Number of names in each subgroup, by tokenizer, gender, race/ethnicity, and tokenization length.

| BERT Tokenizer | Male | | | Female | | |
|---|---|---|---|---|---|---|
| Tokenization length | 1 | 2 | 3 | 1 | 2 | 3 |
| White | 30 | 30 | 30 | 30 | 30 | 30 |
| Black | 30 | 30 | 30 | 30 | 30 | 30 |
| Hispanic | 30 | 30 | 30 | 30 | 30 | 30 |
| Asian | 30 | 30 | 7 | 30 | 30 | 19 |

| RoBERTa/GPT-2 Tokenizer | Male | | | Female | | |
|---|---|---|---|---|---|---|
| Tokenization length | 1 | 2 | 3 | 1 | 2 | 3 |
| White | 30 | 30 | 30 | 30 | 30 | 30 |
| Black | 24 | 30 | 30 | 12 | 30 | 30 |
| Hispanic | 9 | 30 | 30 | 8 | 30 | 30 |
| Asian | 23 | 30 | 21 | 10 | 30 | 21 |
Generating distractors To detect a model's disparate treatment towards names, SODAPOP substitutes the name in an MCQ sample with names associated with different races/ethnicities and genders, and generates a large number of new distractors to robustly test what makes a distractor more likely to fool the MCQ model, thus uncovering the model's implicit associations between names and attributes.
We follow the same algorithm proposed by An et al.
(2023) to generate distractors using a masked-token prediction model (RoBERTa-base). We generate distractors from the correct choice of 50 MCQ samples in Social IQa (Sap et al., 2019). We utilize the same list of names for distractor generation as in SODAPOP. In our study, we take the union of all the distractors generated with different names for a context to form new MCQ samples for more robust results. The total number of MCQs constructed via this step is 4,840,776.
Success rate Recall that each MCQ in Social IQa consists of a social context c, a question q, and three answer choices τ1, τ2, τ3, one of which is the only correct answer. Formally, for an arbitrary distractor τi, the success rate of a word-name pair (*w, n*) is
$$SR(w,n)=P\Big(\operatorname*{arg\,max}_{j\in\{1,2,3\}}\mathcal{M}(c,q,\tau_{j})=i \,\Big|\, \big(w\in\mathrm{tok}(\tau_{i})\big)\wedge\big(n\in\mathrm{tok}(c)\big)\Big),\tag{1}$$
where M(*c, q, τ*j ) produces the logit for answer choice τj using an MCQ model M, and tok splits the input by whitespace so as to tokenize it into a bag of words and punctuation. A **success rate vector** for a name n comprises |V | entries of SR(*w, n*) for all w ∈ V , where V is the vocabulary (i.e., words appearing in all distractors above a certain frequency threshold). Specifically, we set the threshold to 1,000 in our experiments.
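The sketch below shows one way Eq. (1) could be estimated from the model's predictions on the constructed MCQs. The field names of the MCQ records are hypothetical, and the tokenization convention is assumed to match the space-splitting `tok` above.

```python
from collections import defaultdict

def success_rate_vector(mcqs, name, vocab):
    """Estimate SR(w, name) for every word w in `vocab` (Eq. 1, as a sample proportion)."""
    hits = defaultdict(int)    # w -> #MCQs where the distractor containing w fooled the model
    totals = defaultdict(int)  # w -> #MCQs where w is in the distractor and name is in the context
    for q in mcqs:
        # Each record q is assumed to carry: context_tokens, distractor_tokens,
        # distractor_index (i), and predicted_index (the model's arg max over choices).
        if name not in q["context_tokens"]:
            continue
        for w in set(q["distractor_tokens"]) & vocab:
            totals[w] += 1
            if q["predicted_index"] == q["distractor_index"]:
                hits[w] += 1
    return {w: (hits[w] / totals[w] if totals[w] else 0.0) for w in vocab}
```

Stacking the resulting dictionaries over all words in V gives the success rate vector for the name.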
Models We conduct experiments using three popular language models, namely BERT (Devlin et al.,
2019), RoBERTa (Liu et al., 2019), and GPT-2 (Radford et al., 2019). The size of each model is specified in Appendix B.1. We finetune each model on the Social IQa training set with a grid search over hyperparameters (batch size = {3, 4, 8}, learning rate = {1e−5, 2e−5, 3e−5}, epochs = {2, 4, 10}). Although different hyperparameters lead to varying final performance on the development set of Social IQa, we find the differences to be within a small range in most cases (within 1%−2%). Since our analysis does not strongly depend on the performance of a model, we arbitrarily analyze a model that has decent validation accuracy. In our study, the BERT-base model is finetuned with batch size 3 and learning rate 2e−5 for 2 epochs and achieves 60.51% on the original dev set. The RoBERTa-base model is finetuned with batch size 8 and learning rate 1e−5 for 4 epochs and achieves 70.51% on the original dev set. The GPT-2 model is finetuned with batch size 4 and learning rate 2e−5 for 4 epochs and achieves 61.91% on the original dev set. To finetune on the counterfactually augmented dataset, we conduct a grid search over batch size = {2, 3, 8} and learning rate = {1e−5, 2e−5} for 1 epoch. We obtain similar dev set accuracy for these settings, all around 60%.
Evaluating the 4 million MCQs across more than 600 names is computationally costly. We estimate that it takes about 7 days using 30 GPUs (a combination of NVIDIA RTX A4000 and NVIDIA
TITAN X) for each model. However, we note that a smaller number of MCQ instances and names may sufficiently capture the biased behavior of a model. We choose to include an extremely large number of test instances and a wide range of names to ensure the robustness of our study. Although important, it is out of the scope of this paper to find the optimal size of the bias-discovery test set to minimize computation time and resources.
Subgroup names For fine-grained analysis that compares a model's different behavior towards two name groups that vary by only one confounding factor, we compile subgroups of names that share the same race/ethnicity, gender, and tokenization length. For example, White female names with tokenization length 2 form one subgroup of names.
In total, we sample 686 names for BERT and 608 names for RoBERTa and GPT-2. Table 1 shows the specific number of names in each subgroup. Given the data source available to us, we are unable to collect a sufficient number of names for certain subgroups (e.g., Asian male names with tokenization length 3). Nonetheless, these limitations do not affect our findings of different treatment towards the other subgroups, which have sufficiently large numbers of names.
## C Additional Experiment Results
We illustrate the tSNE projections of SR vectors for RoBERTa and GPT-2 in Fig. 8 and Fig. 9 respectively. The dimension of the SR vectors is 660 for these two models. The plots show that, as we control each of the factors in our analysis, both RoBERTa and GPT-2 treat names differently in the downstream task of social commonsense reasoning.
We also report the membership prediction accuracy for RoBERTa and GPT-2 in Fig. 10. We observe that gender, race/ethnicity, and tokenization length are all strongly correlated with the disparate treatment of names in these models as well. GPT-2 behaves similarly to BERT, in that tokenization length, race/ethnicity, and gender are all factors that indicate the model's different behavior towards names.
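This kind of analysis can be sketched as follows. The arrays are random placeholders, and the logistic-regression membership classifier is only one simple choice, not necessarily the exact setup behind Fig. 10.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
from sklearn.model_selection import cross_val_score

# Placeholder data: one success-rate vector per name (e.g., 608 names x 660 vocabulary words)
# and an integer group label (e.g., race/ethnicity) per name.
sr_vectors = np.random.rand(608, 660)
group_labels = np.random.randint(0, 4, size=608)

# 2-D tSNE projection of the SR vectors for visualization.
embedding = TSNE(n_components=2, random_state=0).fit_transform(sr_vectors)

# A simple membership-prediction probe: how well can the group be predicted from SR vectors?
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, sr_vectors, group_labels, cv=5).mean()
print(embedding.shape, accuracy)
```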
## D Responsible NLP
Licenses We have used BERT, RoBERTa, and GPT-2 for our empirical studies. BERT uses Apache License Version 2.0,4 and both RoBERTa and GPT-2 use the MIT License.5 We are granted permission to use and modify these models for our experiments per these licenses.
We also use Stanford NER in our experiments, which is under the GNU General Public License (V2 or later).6 The pipeline SODAPOP is under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).7 We have permission to copy and redistribute the material in any medium or format.
The dataset Social IQa is under the Creative Commons Attribution 4.0 International License,8 as it was published by the Association for Computational Linguistics. Per the license, we may "copy and redistribute the material in any medium or format" and "remix, transform, and build upon the material for any purpose, even commercially."
The first name dataset (Rosenman et al., 2022) is under CC0 1.0 Universal (CC0 1.0) Public Domain Dedication.9 Everyone can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
Consistency with the intended use of all artifacts We declare that the use of all models, datasets, or scientific artifacts in this paper aligns with their intended use.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
page 5 first section "Limitations"
✓ A2. Did you discuss any potential risks of your work?
page 5 second section "Ethics statement"
✓ A3. Do the abstract and introduction summarize the paper's main claims?
page 1 "abstract" and section 1 "Introduction"
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 2, 3, And 4
✓ B1. Did you cite the creators of artifacts you used?
Sections 2, 3, and 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix
## C ✓ **Did You Run Computational Experiments?** Sections 2, 3, And 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ma-etal-2023-improving | Improving Syntactic Probing Correctness and Robustness with Control Tasks | https://aclanthology.org/2023.acl-short.35 | Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic features. However, the probing methods are usually biased by the PLMs{'} memorization of common word co-occurrences, even if they do not form syntactic relations. This paper presents a random-word-substitution and random-label-matching control task to reduce these biases and improve the robustness of syntactic probing methods. Our control tasks are also shown to notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. Our control tasks make syntactic probing methods better at reconstructing syntactic features and more generalizable to unseen text domains. Our experiments show that our proposed control tasks are effective on different PLMs, probing methods, and syntactic features. | # Improving Syntactic Probing Correctness And Robustness With Control Tasks
Weicheng Ma1, Brian Wang2, Hefan Zhang2, Lili Wang2, Rolando Coto-Solano3, Saeed Hassanpour4, and Soroush Vosoughi5
1,2,5Department of Computer Science, Dartmouth College
3Department of Linguistics, Dartmouth College
4Department of Biomedical Data Science, Dartmouth College
[email protected] [email protected]

## Abstract

Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic relations. However, the probing methods are usually biased by the PLMs' memorization of common word co-occurrences, even if they do not form syntactic relations. This paper presents a random-word-substitution and random-label-matching control task to reduce these biases and improve the robustness of syntactic probing methods. Our control tasks are also shown to notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. Our control tasks make syntactic probing methods better at reconstructing syntactic relations and more generalizable to unseen text domains. Our experiments show that our proposed control tasks are effective on different PLMs, probing methods, and syntactic relations.
## 1 Introduction
To explain the high performance of PLMs on various natural language processing (NLP) tasks, efforts have been made to examine the syntactic relation-encoding ability of these models. For example, Manning et al. (2020) attempt to reconstruct syntactic relations from the attention heads of Transformer models (Vaswani et al., 2017) using raw attention scores. Leave-one-out probing methods (Brunner et al., 2020), instead, measure the influence of ablating parts of each syntactic relation on the hidden representations of the models.
However, the probing results may not faithfully reflect the encoding of syntactic relations, as the memorization of common word co-occurrences in the training data of PLMs can lead to incorrect and non-generalizable probing results (Hewitt and Liang, 2019). We observe the same issues in our experiments, where many attention heads ranked highly by the attention-as-classifier and leave-one-out probing methods highlight frequent word pairs regardless of whether there is a syntactic relation between them. This reduces the trustworthiness of the probing methods and any model interpretation that relies on them. To address this issue and improve the correctness, robustness, and generalizability of existing probing methods, we design two control tasks to reduce the adverse effects of the PLMs' memorization of word co-occurrences.

Positive: Influential members of the House Ways and Means Committee introduced legislation that would restrict how the new savings-and-loan bailout agency can raise capital, creating another potential obstacle to the government's sale of sick thrifts.

Random-Word-Substitution: Influential members of the House Ways and Means Committee introduction legislation that would restrict how the new savings-and-loan bailout agency can raise capital, creating another potential obstacle to the government's sale of sick thrifts.

Random-Label-Matching: Influential members of the House Ways and Means Committee introduced legislation that would restrict how the new savings-and-loan bailout agency can raise capital, creating another potential obstacle to the government's sale of sick thrifts.

Figure 1: Top: An instance labeled with the correct "subject" dependency relation (Positive); Middle: the instance generated by Random-Word-Substitution, where the instance is labeled with the correct pair of words but an incorrect word form; Bottom: the instance generated by Random-Label-Matching, where the instance is labeled with an incorrect pair of words. The head verb is in blue and the dependent is in red for all the examples.
The **random-word-substitution control task** substitutes one component word (i.e., the head or dependent word) of each syntactic relation with another form of the same word to make the text ungrammatical. The **random-label-matching control task** randomly matches one component word of each syntactic relation with a random irrelevant word in the sentence to make the syntactic-relation labels incorrect.
Figure 1 shows examples for each control task. The control instances (i.e., negative instances) are generated automatically by substituting words or labels of instances in the positive datasets.
By down-weighting the attention heads that are ranked highly by the probing methods on the control tasks, we observe notably more consistent probing results between the attention-as-classifier and leave-one-out methods on the BERT (Devlin et al.,
2019) and RoBERTa (Liu et al., 2019) models, with improvements above 0.1 for the Spearman's rank correlation coefficients (Spearman's ρ). 1.
The layer-wise distributions of top-ranked attention heads also become notably more consistent across different text attributes of the probing instances.
The results demonstrate the effectiveness of our proposed control tasks for improving the quality and robustness of syntactic probing methods.
## 2 Syntactic Probing Methods
Different families of probing methods rely on different assumptions (Belinkov and Glass, 2019) and as such, probing results from different families cannot be meaningfully compared. Hence, we examine two probing methods that are both based on attention distributions: (1) Given a sentence and a headword for a syntactic relation, the **attention-as-classifier** method (Manning et al., 2020) predicts another word as the dependent if it puts the highest attention score on the headword; (2) As an attention-based version of the **leave-one-out** probing method used by Meister et al. (2021), we mask the headword of each syntactic relation for each sentence and predict the word whose attention distribution changes the most as the dependent word.

Following Kobayashi et al. (2020), we additionally examine two variant methods, the **norm-as-classifier** and **leave-one-out-norm** methods, which predict the dependent words based on the distributions or changes of attention norms, respectively. We calculate the importance of each attention head for encoding each syntactic relation by evaluating the top-3 accuracy (ACC@3) of the predictions, defined as the percentage of instances where the dependent words from the ground truth are ranked among the top-3 in the predictions. We use ACC@3 since in many cases, the highest attention scores fall on separator tokens such as "[SEP]" and punctuation marks (Clark et al., 2019a).
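As a concrete illustration of the attention-as-classifier probe and the ACC@3 computation, the sketch below scores a single attention head. The instance format (pre-aligned head and dependent word-piece positions) and the model name are assumptions, not the paper's exact implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True)

def acc_at_3(instances, layer, head, k=3):
    """ACC@3 of one head: fraction of instances whose gold dependent is among the k tokens
    that put the most attention on the head word. `instances` is a hypothetical list of
    (sentence, head word-piece index, dependent word-piece index) triples."""
    correct = 0
    for sent, head_pos, dep_pos in instances:
        enc = tok(sent, return_tensors="pt")
        with torch.no_grad():
            attn = model(**enc).attentions[layer][0, head]  # (seq_len, seq_len)
        scores = attn[:, head_pos].clone()   # attention each token puts on the head word
        scores[head_pos] = float("-inf")     # a word cannot be its own dependent
        topk = torch.topk(scores, k).indices.tolist()
        correct += int(dep_pos in topk)
    return correct / len(instances)
```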
## 3 Probing Datasets
We use the "subject" (subj), "object" (obj), "nominal modifier" (nmod), "adverbial modifier" (advmod), and "coreference" (coref) relations in our analyses. We use the English dataset for the CoNLL-2009 shared task (Hajic et al. ˇ , 2009) to construct our positive and control probing datasets.
Figure 1 shows an example instance from the positive dataset and each control dataset.
## 3.1 Positive Datasets
Our positive dataset for each syntactic relation contains the correct annotations of words that make up the syntactic relation, e.g., the subject words and the corresponding verbs for "subj". The gold-standard dependency annotations in the CoNLL-2009 dataset are used for the "subj", "obj", "nmod",
and "advmod" relations and the SpanBERT model
(Joshi et al., 2020) is used to annotate the "coref" relation 2.
## 3.2 Random-Word-Substitution Control
If an attention head in a Transformer model encodes a specific syntactic relation, it should not highlight the connections between words that do not form that syntactic relation. To measure and control for this effect, we construct the random-word-substitution control dataset by substituting one component word of the syntactic relation in each instance of the positive datasets with another part-of-speech form of the same word (e.g., changing a verb to its noun form), making the instance ungrammatical without greatly changing its semantics. We use the Language Tool 3, a grammar correction tool, to verify that the sentences become ungrammatical after word substitution.
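A minimal sketch of such a grammaticality check, using the `language_tool_python` wrapper around LanguageTool, is shown below. Treating any rule match as a flag is an illustrative simplification, not necessarily the exact criterion used here.

```python
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")

def is_flagged(sentence: str) -> bool:
    # Returns True if LanguageTool reports any issue with the sentence.
    return len(tool.check(sentence)) > 0

original = "The committee introduced legislation yesterday."      # example sentence (assumed)
substituted = "The committee introduction legislation yesterday."  # verb replaced by its noun form
print(is_flagged(original), is_flagged(substituted))
```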
## 3.3 Random-Label-Matching Control
We also extend the existing method of the random control task (Hewitt and Liang, 2019) to construct the random-label-matching control dataset. Specifically, for each instance in our positive datasets, we use our gold-standard labels and coreference labels generated by SpanBERT to remove word pairs that are syntactically related, leaving us with words that are not syntactically related. These words are then used to create syntactically unrelated pairs by combining known head words with randomly selected dependent words. We then (intentionally) mislabel each pair as forming a specific syntactic relation, depending on the positive dataset from which the instance was taken. Attention heads that encode the relations between these syntactically unrelated word pairs are likely memorizing the co-occurrence of frequent word pairs without regard to syntactic correctness and thus should not be ranked highly by syntactic probing methods.

2SpanBERT achieves an F1 score of 79.60% on the OntoNotes v5.0 coreference dataset (Pradhan et al., 2012).

3https://languagetool.org/
We conduct three sets of experiments to examine our probing methods' sensitivity to "spurious" word correlations (Section 4.1), consistency (Section 4.2), and robustness to text attributes (Section 4.3). We run the experiments using the BERT-base and RoBERTa-base models for generality. All the experiments are run on an Nvidia RTX-6000 GPU.
## 4.1 Syntactic Relation Reconstruction
We follow Manning et al. (2020) to evaluate the correctness of attention-head rankings produced by the probing methods via syntactic relation reconstruction experiments. Specifically, for a given headword, we use the attention scores (for attention-as-classifier) or norms (for norm-as-classifier) between that headword and all other words in the instance to predict the dependent word. Similarly, we use the distribution changes of the attention scores (for leave-one-out) or norms (for leave-one-out-norm) when the headword is masked to predict the dependent word. Contributive attention heads for encoding a particular syntactic relation should achieve high syntactic-relation reconstruction performance (in ACC@3) given syntactically correct
(positive) labels and low performance given incorrect (negative/control) labels.
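To make the leave-one-out variant concrete, the sketch below scores each token by how much its attention distribution changes when the head word is masked. It reuses the `tok` and `model` handles from the earlier sketch, and total-variation distance is one illustrative choice of change measure, not necessarily the exact one used in the paper.

```python
import torch

def leave_one_out_scores(sent, head_pos, layer, head):
    """Per-token change in attention (for one head) after masking the head word;
    the most-changed token is predicted as the dependent."""
    enc = tok(sent, return_tensors="pt")
    masked = {k: v.clone() for k, v in enc.items()}
    masked["input_ids"][0, head_pos] = tok.mask_token_id  # replace the head word with [MASK]
    with torch.no_grad():
        attn = model(**enc).attentions[layer][0, head]          # (seq_len, seq_len)
        attn_masked = model(**masked).attentions[layer][0, head]
    # Total-variation distance between each token's outgoing attention distributions.
    return 0.5 * (attn - attn_masked).abs().sum(dim=-1)
```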
We use the left-out development set of the CoNLL-2009 dataset (labeled using the groundtruth annotations and SpanBERT) as one positive probing dataset (pos-main) and the corresponding random-word-substitution and random-labelmatching control instances as two negative datasets.
We construct an additional positive probing dataset
(pos-uncommon) by substituting the dependent words with other words that have the same part of speech but rarely co-occur (<5 times) with the corresponding headwords in the English Wikipedia corpus 4. This dataset enables us to study the effect of co-occurrence for syntactically related pairs of words on the syntactic relation reconstruction task.
We use the English Wikipedia corpus as it is representative of the data used to pre-train BERT and RoBERTa. All the evaluations are conducted on the top-5 attention heads according to each probing method (with and without control tasks), and the scores are averaged across syntactic relations and heads.
Results show that applying our proposed control tasks does not harm the syntactic-relation reconstruction performance of the four probing methods on the pos-main dataset. In contrast, applying the random control task (Hewitt and Liang, 2019) occasionally leads to a performance drop of 1.32. This suggests that our proposed control tasks are more robust than the existing random control task. On the pos-uncommon dataset, our proposed control tasks lead to an average increase of 9.17±0.13 (BERT) and 4.07±0.15 (RoBERTa) in the syntactic-relation reconstruction performance.
Additionally, the control tasks on average reduce the incorrect prediction of syntactic relations in our two negative datasets by 11.70 ± 0.09 (BERT) and 12.69 ± 0.06 (RoBERTa). These results suggest that our proposed control tasks can reduce the influence of the PLMs' memorization of syntacticallyirrelevant word co-occurrences for encoding syntactic relations. The complete results of these experiments are shown in Appendix A.
## 4.2 Consistency Of Attention-Head Rankings
We also observe that our control tasks lead to higher consistency between the two categories of probing methods. Without any control task, the Spearman's ρ values between the head rankings produced by the four probing methods are always lower than 0.38 (for BERT) and 0.49 (for RoBERTa), while applying the control tasks improves the consistency from a minimum of 0.10 to 0.79 (for BERT) and 0.14 to 0.53 (for RoBERTa), in Spearman's ρ. Furthermore, the highest consistency improvements are achieved when applying both our random-word-substitution and random-label-matching control tasks. Applying the random control task independently or jointly with our two control tasks does not lead to higher consistency improvements. The complete results of these experiments are shown in Appendix B.

4https://dumps.wikimedia.org
Prior work has shown that only a small focused set of heads contributes to the encoding of each linguistic feature (Michel et al., 2019; Voita et al.,
2019), and as such, a good probing method should highlight these select contributive heads. Figure 2 shows the percentage of attention heads in common among the top-k heads (1 ≤ k ≤ 144) between each pair of probing methods, either with or without control tasks. We find that applying the control tasks generally improves the agreement between attention-head rankings, with the effect being more pronounced for the top 15% of the heads, i.e., the attention heads that are deemed the most important for encoding each syntactic rule. These results show that our control tasks aid the probing methods in highlighting the small set of contributive heads.
## 4.3 Robustness To Text Attributes
The literature suggests that most contributive attention heads for encoding syntactic relations lie on the middle layers of Transformer models (Hewitt and Manning, 2019; Vig and Belinkov, 2019; Goldberg, 2019; Jawahar et al., 2019; Clark et al.,
2019b). Consequently, the layer-wise distribution of the attention heads ranked highly by a robust syntactic probing method should follow a similar pattern and not be greatly affected by the variation in the text attributes.
We divide the pos-main dataset into nine subsets with different sentence lengths (< 20 tokens, 20 − 30 tokens, and > 30 tokens), numbers of clauses (1, 2, and > 2 clauses), and distances between the head and dependent words (1 − 2 tokens, 3 − 5 tokens, and > 5 tokens). The parameters for each of the attributes were selected to create a relatively uniform distribution of sentences for each of the datasets for a given attribute. We repeat all the experiments with the attention-as-classifier and leave-one-out probing methods on these nine datasets. The layer-wise distributions of top-5 attention heads for each probing method (aggregated for the five syntactic relations) are shown in Figure 3. We show the results for the two probing methods with both our combined control tasks and without any control.
We note that the overall trend (represented by the blue line in each figure) shows that the top-ranked attention heads are over-represented on the middle layers, either with or without control tasks.
This is well-aligned with the literature, suggesting that the most contributive attention heads for encoding syntactic relations (i.e., middle layers)
are identified by the probing methods even without any control tasks (Hewitt and Manning, 2019; Vig and Belinkov, 2019; Goldberg, 2019; Jawahar et al., 2019). However, the probing methods without control tasks also put high weights on the low-level layers (below Layer 2) more frequently than those with control tasks. We speculate the cause to be the sensitivity of the probing methods
(without control tasks) to the memorization of common word co-occurrences on each attention head; since the lower-layer attention heads are closer to the embedding layer, they usually encode richer lexical features (Limisiewicz and Mareček, 2021).
Our claim is further supported by the observation that there is greater variation in the attention-head rankings between the individual probing results for each of the nine attributes when no control is used.
This can be visually observed in Figure 3 by comparing the deviation between different colored bars
(corresponding to different attributes) on the left and right figures, corresponding to probing without and with controls, respectively. We additionally measure this difference in variation quantitatively by examining the consistency of the attention-head rankings over the entire 144 heads for individual probing results for each of the nine attributes. The Spearman's ρ of the rankings between all settings
(i.e., using the entire development set or any of the nine subsets) range from 0.75 to 0.96 when using the combination of the random-word-substitution and random-label-matching control tasks. In comparison, Spearman's ρ of the rankings between the settings drops to 0.22 and 0.38 when no control task is applied and between 0.51 and 0.60 when the random control task is used. These experiments suggest that our proposed control tasks can improve syntactic probing methods' robustness and reduce syntactic probing methods' fragility to the models' memorization of common word co-occurrences.
## 5 Conclusion And Future Work
This paper proposes two control tasks to improve the syntactic probing of PLMs and reduce the noise in the probing results of the PLMs' memorization of common word co-occurrences. By applying these control tasks, we observe notable improvements in the correctness and consistency of the results produced by four attention-based probing methods across two categories of five diverse syntactic relations. The improvements are also robust to different PLMs' and attributes of the probing instances, suggesting the general applicability of our proposed control tasks.
Future work can expand the use of our proposed control tasks to other models or syntactic relations.
## Acknowledgement
This work was partially funded by Dr. Vosoughi's 2022 Google Research Award.
## Limitations
While our study provides promising results in reducing biases and improving the robustness of syntactic probing methods, there are some limitations that must be discussed:
First, our experiments only utilized attentionbased probing approaches, and it is unclear whether our results would generalize to other families of probing methods. Therefore, further investigation is needed to determine the effectiveness of our control tasks for other types of probing methods. Second, we only explored a subset of syntactic relations in English, including subject, object, nominal modifier, adverbial modifier, and coreference. Our results may not be generalizable to other syntactic relations or languages. Future studies could expand the exploration of other syntactic features and investigate the effectiveness of our control tasks in different languages. Third, our experiments only focused on two pre-trained language models, namely BERT and RoBERTa. It is unclear whether our control tasks would be effective for other types of PLMs, and further studies could investigate the effectiveness of our control tasks on other types of PLMs. Finally, our study only focused on syntactic probing methods and did not investigate probing methods for other types of NLP tasks, such as natural language inference, machine translation, and summarization. Therefore, further studies could explore the effectiveness of our control tasks on other types of NLP tasks.
Despite these limitations, our proposed control tasks have shown promising results in reducing biases and improving the robustness of syntactic probing methods, and we hope that our work will inspire further research in this direction.
## Ethics Statement
This paper used publicly available pre-trained models (bert-base-cased and roberta-base models and the SpanBERT model) and a publicly available dataset (CoNLL-2009). No sensitive information is introduced to the data annotations or experiments.
Also, we only examine the ways pre-trained language models encode general syntactic relations, which should not introduce stereotypes or biases into our results and analyses. We do not foresee any potential ethical concerns in our work. However, we should note that our work is limited to English syntactic relations and should not be generalized to other languages without additional experiments.
## References
Haldun Akoglu. 2018. User's guide to correlation coefficients. *Turkish journal of emergency medicine*,
18(3):91–93.
Yonatan Belinkov and James Glass. 2019. Analysis Methods in Neural Language Processing: A Survey.
Transactions of the Association for Computational Linguistics, 7:49–72.
Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2020. On identifiability in transformers. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019a. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019b. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yoav Goldberg. 2019. Assessing bert's syntactic abilities. *arXiv preprint arXiv:1901.05287*.
Jan Hajič, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antònia Martí, Lluís Màrquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štěpánek, Pavel Straňák, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1–18, Boulder, Colorado. Association for Computational Linguistics.
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 2733–2743, Hong Kong, China. Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 3651–3657, Florence, Italy. Association for Computational Linguistics.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. *Transactions of the Association for* Computational Linguistics, 8:64–77.
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight:
Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7057–7075, Online. Association for Computational Linguistics.
Tomasz Limisiewicz and David Mareček. 2021. Introducing orthogonal constraint in structural probes. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 428–442, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Christopher D Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. *Proceedings of the National Academy of Sciences*, 117(48):30046–30054.
Clara Meister, Stefan Lazov, Isabelle Augenstein, and Ryan Cotterell. 2021. Is sparse attention more interpretable? In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 122–129, Online. Association for Computational Linguistics.
Paul Michel, Omer Levy, and Graham Neubig. 2019.
Are sixteen heads really better than one? In Advances in Neural Information Processing Systems 32:
Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 14014–14024.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In *Joint Conference on* EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63–76, Florence, Italy. Association for Computational Linguistics.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy.
Association for Computational Linguistics.
## A Syntactic Relation Reconstruction Results
We display in Tables A1–A4 the average syntactic relation reconstruction performance on the top-5 attention heads produced by each probing method for the five syntactic relations ("subj", "obj", "nmod", "advmod", and "coref") on the pos-main, pos-uncommon, random-word-substitution, and random-label-matching datasets, respectively.
## B The Inconsistency Across Probing Methods
The attention-head rankings produced by different probing methods are inconsistent when no control task is applied. As Figure B1 shows, the Spearman's ρ between each pair of probing methods is always lower than 0.38 for BERT and below 0.49 for RoBERTa, which falls under the "weak to moderate correlation" range given the interpretation of Akoglu (2018). As shown in Figures B2 and B3, by applying the random-word-substitution or the random-label-matching control tasks, the Spearman's ρ across probing methods improves greatly, in some cases yielding values above 0.7
As shown in the figures, combining our two control tasks generates the most consistent results for all four probing methods.
Table A1: Average syntactic-relation reconstruction performance (ACC@3) of the top-5 attention heads produced by each probing method, averaged over the five syntactic relations, on the pos-main dataset (CLS: attention-as-classifier; CLS-N: norm-as-classifier; LOO: leave-one-out; LOO-N: leave-one-out-norm).

| Model | Control | CLS | CLS-N | LOO | LOO-N |
|---|---|---|---|---|---|
| BERT | None | 54.01 (0.10) | 56.90 (0.11) | 51.73 (0.13) | 46.50 (0.47) |
| BERT | RAND | 53.91 (0.10) | 57.57 (0.12) | 52.61 (0.14) | 47.61 (0.45) |
| BERT | RWS | 54.50 (0.11) | 57.72 (0.10) | 52.82 (0.11) | 47.71 (0.12) |
| BERT | RLM | 54.88 (0.10) | 57.86 (0.11) | 52.68 (0.12) | 47.91 (0.15) |
| BERT | RWS+RAND | 54.89 (0.11) | 57.79 (0.10) | 52.77 (0.10) | 47.66 (0.17) |
| BERT | RLM+RAND | 54.46 (0.11) | 57.80 (0.10) | 52.60 (0.09) | 47.91 (0.18) |
| BERT | RWS+RLM | **54.99** (0.10) | 58.00 (0.10) | 52.98 (0.13) | 48.06 (0.14) |
| BERT | ALL | 54.88 (0.11) | 57.84 (0.11) | 52.95 (0.14) | 47.99 (0.17) |
| RoBERTa | None | 55.03 (0.13) | 58.33 (0.11) | 56.69 (0.11) | 57.79 (0.09) |
| RoBERTa | RAND | 55.50 (0.12) | 59.15 (0.13) | 57.93 (0.10) | 56.47 (0.17) |
| RoBERTa | RWS | 56.34 (0.10) | 60.17 (0.12) | 58.17 (0.10) | 58.19 (0.09) |
| RoBERTa | RLM | 56.37 (0.09) | 60.18 (0.11) | 58.13 (0.12) | 58.34 (0.08) |
| RoBERTa | RWS+RAND | 55.65 (0.14) | 60.02 (0.12) | 58.48 (0.11) | 58.03 (0.11) |
| RoBERTa | RLM+RAND | 55.70 (0.13) | 60.51 (0.14) | 58.44 (0.13) | 58.28 (0.10) |
| RoBERTa | RWS+RLM | **56.89** (0.10) | 60.83 (0.11) | 58.74 (0.10) | 58.95 (0.08) |
| RoBERTa | ALL | 56.39 (0.13) | 60.83 (0.14) | 58.53 (0.12) | 58.82 (0.10) |
Table A2: Average syntactic-relation reconstruction performance (ACC@3) of the top-5 attention heads on the pos-uncommon dataset.

| BERT | | | | |
|---------|--------|--------|--------|-------|
| Control | CLS | CLS-N | LOO | LOO-N |
| 40.27 | 54.40 | 58.68 | | |
| (0.13) | (0.10) | (0.17) | (0.18) | |
| 45.26 | 54.43 | 59.05 | | |
| (0.17) | (0.09) | (0.22) | (0.20) | |
| 46.97 | 56.26 | 61.51 | | |
| (0.15) | (0.12) | (0.14) | (0.12) | |
| 46.67 | 55.75 | 60.08 | | |
| (0.13) | (0.08) | (0.16) | (0.11) | |
| 47.15 | 56.60 | 64.99 | | |
| (0.20) | (0.09) | (0.14) | (0.15) | |
| 45.61 | 55.89 | 61.03 | | |
| (0.17) | (0.10) | (0.17) | (0.16) | |
| 48.75 | 60.38 | 77.13 | | |
| (0.15) | (0.06) | (0.10) | (0.16) | |
| 47.04 | 60.15 | 75.36 | | |
| (0.14) | (0.07) | (0.16) | (0.15) | |
| RoBERTa | | | | |
| Control | CLS | CLS-N | LOO | LOO-N |
| 42.52 | 70.03 | 70.97 | | |
| (0.09) | (0.16) | (0.22) | (0.18) | |
| 45.57 | 68.21 | 72.11 | | |
| (0.12) | (0.16) | (0.17) | (0.18) | |
| 46.48 | 74.03 | 73.46 | | |
| (0.12) | (0.14) | (0.20) | (0.15) | |
| 46.32 | 72.81 | 72.70 | | |
| (0.13) | (0.10) | (0.18) | (0.20) | |
| 46.47 | 71.70 | 72.22 | | |
| (0.10) | (0.15) | (0.15) | (0.19) | |
| 46.06 | 71.37 | 72.39 | | |
| (0.11) | (0.13) | (0.17) | (0.20) | |
| 48.35 | 74.84 | 74.45 | | |
| (0.10) | (0.10) | (0.17) | (0.21) | |
| 47.81 | 72.69 | 72.39 | | |
| (0.08) | (0.13) | (0.10) | (0.14) | |
Table A3: Average syntactic-relation reconstruction performance (ACC@3) of the top-5 attention heads on the random-word-substitution control dataset (negative dataset; lower is better).

| Model | Control | CLS | CLS-N | LOO | LOO-N |
|---|---|---|---|---|---|
| BERT | None | 67.75 (0.10) | 70.28 (0.10) | 54.59 (0.14) | 51.09 (0.13) |
| BERT | RAND | 64.08 (0.13) | 66.13 (0.12) | 51.63 (0.13) | 46.74 (0.12) |
| BERT | RWS | 56.37 (0.09) | 58.56 (0.10) | 42.02 (0.13) | 39.19 (0.11) |
| BERT | RLM | 55.99 (0.10) | 58.83 (0.10) | 42.11 (0.12) | 38.64 (0.12) |
| BERT | RWS+RAND | 56.57 (0.09) | 58.95 (0.09) | 42.72 (0.12) | 39.30 (0.11) |
| BERT | RLM+RAND | 56.08 (0.12) | 59.15 (0.10) | 42.92 (0.13) | 39.16 (0.12) |
| BERT | RWS+RLM | **53.38** (0.09) | 56.72 (0.12) | 37.95 (0.13) | 34.55 (0.12) |
| BERT | ALL | 54.79 (0.10) | 57.13 (0.12) | 39.20 (0.08) | 36.07 (0.11) |
| RoBERTa | None | 64.86 (0.10) | 66.58 (0.17) | 64.81 (0.07) | 65.95 (0.08) |
| RoBERTa | RAND | 56.06 (0.17) | 57.79 (0.10) | 58.95 (0.09) | 60.83 (0.11) |
| RoBERTa | RWS | 50.92 (0.09) | 53.85 (0.10) | 51.17 (0.08) | 52.99 (0.08) |
| RoBERTa | RLM | 50.23 (0.07) | 53.44 (0.09) | 51.70 (0.06) | 53.25 (0.08) |
| RoBERTa | RWS+RAND | 51.21 (0.11) | 54.34 (0.09) | 52.13 (0.07) | 52.20 (0.11) |
| RoBERTa | RLM+RAND | 51.07 (0.09) | 54.04 (0.10) | 52.94 (0.10) | 53.70 (0.08) |
| RoBERTa | RWS+RLM | **46.97** (0.10) | 49.50 (0.12) | 47.45 (0.11) | 48.59 (0.10) |
| RoBERTa | ALL | 49.36 (0.12) | 51.13 (0.09) | 50.03 (0.09) | 51.16 (0.09) |
Table A4: Average syntactic-relation reconstruction performance (ACC@3) of the top-5 attention heads on the random-label-matching control dataset (negative dataset; lower is better).

| Model | Control | CLS | CLS-N | LOO | LOO-N |
|---|---|---|---|---|---|
| BERT | None | 18.22 (0.10) | 17.45 (0.04) | 17.88 (0.01) | 18.56 (0.10) |
| BERT | RAND | 14.13 (0.03) | 13.21 (0.01) | 13.29 (0.02) | 14.24 (0.06) |
| BERT | RWS | 10.82 (0.02) | 11.50 (0.02) | 11.31 (0.02) | 11.36 (0.02) |
| BERT | RLM | 10.83 (0.05) | 12.29 (0.08) | 11.29 (0.02) | 11.86 (0.03) |
| BERT | RWS+RAND | 12.03 (0.10) | 11.81 (0.01) | 11.86 (0.05) | 11.33 (0.05) |
| BERT | RLM+RAND | 12.29 (0.05) | 12.74 (0.04) | 11.95 (0.03) | 12.17 (0.04) |
| BERT | RWS+RLM | 10.22 (0.02) | 9.65 (0.03) | 9.51 (0.02) | 10.16 (0.02) |
| BERT | ALL | 10.57 (0.02) | 10.13 (0.05) | 10.15 (0.03) | 11.19 (0.02) |
| RoBERTa | None | 18.60 (0.01) | 19.67 (0.02) | 16.52 (0.04) | 17.02 (0.02) |
| RoBERTa | RAND | 13.31 (0.09) | 15.04 (0.10) | 13.36 (0.02) | 12.33 (0.04) |
| RoBERTa | RWS | 11.09 (0.01) | 11.26 (0.01) | 10.81 (0.01) | 10.03 (0.02) |
| RoBERTa | RLM | 11.62 (0.01) | 11.30 (0.03) | 11.03 (0.01) | 10.40 (0.01) |
| RoBERTa | RWS+RAND | 11.37 (0.04) | 12.41 (0.08) | 11.28 (0.02) | 10.84 (0.02) |
| RoBERTa | RLM+RAND | 12.44 (0.03) | 13.38 (0.05) | 11.42 (0.01) | 11.56 (0.02) |
| RoBERTa | RWS+RLM | 10.85 (0.01) | 10.23 (0.02) | 9.37 (0.02) | 9.53 (0.02) |
| RoBERTa | ALL | 12.11 (0.02) | 12.85 (0.05) | 9.66 (0.03) | 9.83 (0.02) |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.
✓ B1. Did you cite the creators of artifacts you used?
Section 3.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
All the datasets we use are publicly available, and they are cited in Section 3.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
All the datasets we use are publicly available, and they are cited in Section 3.
## C ✓ **Did You Run Computational Experiments?** Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. We conducted probing experiments that do not require training or hyperparameter search.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
arora-park-2023-split | Split-{NER}: Named Entity Recognition via Two Question-Answering-based Classifications | https://aclanthology.org/2023.acl-short.36 | In this work, we address the NER problem by splitting it into two logical sub-tasks: (1) Span Detection which simply extracts entity mention spans irrespective of entity type; (2) Span Classification which classifies the spans into their entity types. Further, we formulate both sub-tasks as question-answering (QA) problems and produce two leaner models which can be optimized separately for each sub-task. Experiments with four cross-domain datasets demonstrate that this two-step approach is both effective and time efficient. Our system, SplitNER outperforms baselines on OntoNotes5.0, WNUT17 and a cybersecurity dataset and gives on-par performance on BioNLP13CG. In all cases, it achieves a significant reduction in training time compared to its QA baseline counterpart. The effectiveness of our system stems from fine-tuning the BERT model twice, separately for span detection and classification. The source code can be found at \url{https://github.com/c3sr/split-ner}. | # Split-Ner: Named Entity Recognition Via Two Question-Answering-Based Classifications
Jatin Arora
Nuro Inc.
[email protected]

Youngja Park
IBM T.J. Watson Research Center
[email protected]
## Abstract
In this work, we address the NER problem by splitting it into two logical sub-tasks: (1)
Span Detection which simply extracts mention spans of entities, irrespective of entity type;
(2) *Span Classification* which classifies the spans into their entity types. Further, we formulate both sub-tasks as question-answering
(QA) problems and produce two leaner models which can be optimized separately for each sub-task. Experiments with four cross-domain datasets demonstrate that this two-step approach is both effective and time efficient.
Our system, SplitNER outperforms baselines on OntoNotes5.0, *WNUT17* and a cybersecurity dataset and gives on-par performance on BioNLP13CG. In all cases, it achieves a significant reduction in training time compared to its QA baseline counterpart. The effectiveness of our system stems from fine-tuning the BERT
model twice, separately for span detection and classification. The source code can be found at github.com/c3sr/split-ner.
## 1 **Introduction**
Named entity recognition (NER) is a foundational task for a variety of applications like question answering and machine translation (Li et al.,
2020a). Traditionally, NER has been seen as a sequence labeling task where a model is trained to classify each token of a sequence to a predefined class (Carreras et al., 2002, 2003; Chiu and Nichols, 2016; Lample et al., 2016; Ma and Hovy, 2016; Devlin et al., 2019; Wan et al., 2022).
Recently, there has been a new trend of formulating NER as a span prediction problem (Stratos, 2017; Li et al., 2020b; Jiang et al., 2020; Ouchi et al.,
2020; Fu et al., 2021), where a model is trained to jointly perform span boundary detection and multiclass classification over the spans. Another trend is to formulate NER as a question answering (QA)
task (Li et al., 2020b), where the model is given a sentence and a query corresponding to each entity type. The model is trained to understand the query and extracts mentions of the entity type as answers.
While these new frameworks have shown improved results, both approaches suffer from a high computational cost: span-based NER systems consider all possible spans (i.e., n² (quadratic) spans for a sentence with n tokens) and the QA-based system multiplies each input sequence by the number of entity types resulting in N×T input sequences for N sentences and T entity types.
In this work, we borrow the effectiveness of span-based and QA-based techniques and make it more efficient by breaking (splitting up) the NER
task into a two-step pipeline of classification tasks.
In essence, our overall approach comes under the span-based NER paradigm, and each sub-task is formulated as a QA task inspired by the higher accuracy offered by the QA framework. The first step, *Span Detection*, performs token-level classification to extract mention spans from text, irrespective of entity type, and the second step, *Span Classification*, classifies the extracted spans into their corresponding entity type, thus completing the NER task. Unlike other span-based NER techniques which are quadratic in terms of sequence length, our *Span Detection* process is linear. Compared to other QA-based techniques which query for all entity types in each sentence, our *Span Classification* queries each sentence only once for each entity mention in the sentence. This makes it highly efficient for datasets with a large number of entity types like *OntoNotes5.0*.
## 2 **Method**
Figure 1 illustrates how our two-step SplitNER
system works. *Span Detection Model* is entity-agnostic and identifies all mention spans irrespective of entity type. The extracted spans are passed to *Span Classification Model* which reanalyses them in the sentence structure and classifies them into an entity type. Both models use BERT-
base as their underlying architecture and are designed as QA tasks. Hence, moving forward, we may sometimes explicitly call our system SplitNER(QA-QA) to distinguish it from other variants we experiment with.
## 2.1 **Span Detection**
Given a sentence S as an n-length sequence of tokens, S = ⟨w1, w2, . . . , wn⟩, the goal is to output a list of spans ⟨s, e⟩, where s, e ∈ [1, n] are *start* and *end* indices of a mention. We formulate this as a QA task classifying each token using the BIOE
scheme. Since the goal is to detect spans irrespective of their entity type, we use a generic question, "Extract important entity spans from the following text", prefixed with the input sentence (see Figure 1).
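For concreteness, the sketch below shows how such a span-detection instance might be packed as a question and sentence pair for BERT; the example sentence, the BIOE labels, and the tokenizer checkpoint are illustrative assumptions rather than the released preprocessing code.

```python
from transformers import BertTokenizerFast

# Illustrative only: a fixed, entity-agnostic question paired with the sentence,
# with BIOE labels over the sentence tokens (single-token mentions labelled "B" here).
QUESTION = "Extract important entity spans from the following text"
sentence = "Emily flew to New York"
bioe_labels = ["B", "O", "O", "B", "E"]  # mentions: "Emily", "New York"

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Packed as a standard QA pair: [CLS] question tokens [SEP] sentence tokens [SEP]
encoding = tokenizer(QUESTION, sentence, return_tensors="pt")
print(tokenizer.decode(encoding["input_ids"][0]))
```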
A well-known problem in pipeline systems is error propagation. Inaccurate mention boundaries will lead to incorrect entity type classification. We observed that such boundary detection errors happen mostly for domain-specific terms which occur rarely and do not have a good semantic representation in the underlying BERT model. However, these domain specific terms often share patterns at character-level (e.g., chemical formulas). Thus we add character sequences and intrinsic orthographic patterns as additional features along with the BERT embeddings. The character and pattern features are shown to produce better word representations (Carreras et al., 2002; Limsopatham and Collier, 2016; Boukkouri et al., 2020; Lange et al., 2021).
Character Sequence Feature To learn character-level representation of each token, we use five one-dimensional CNNs with kernel sizes from 1 to 5, each having 16 filters and 50 input channels. Each token output from WordPiece Tokenizer is fed to the five CNN models simultaneously, which produce a 50-dimensional embedding for each character. These are max-pooled and the outputs from the CNNs are concatenated and passed through a linear layer with ReLU activation to get a 768-dimensional character-level representation of the token. Figure 2a shows the process.
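A minimal PyTorch sketch of this character-CNN encoder, following the hyperparameters stated above (five 1-D convolutions with kernel sizes 1 to 5, 16 filters each, 50-dimensional character embeddings, max-pooling, and a ReLU-activated projection to 768 dimensions), might look as follows. The module and variable names, padding scheme, and character-vocabulary handling are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class CharCNNFeature(nn.Module):
    """Character-level token representation: 5 parallel 1-D CNNs (kernel sizes 1-5),
    16 filters each, over 50-dim character embeddings, max-pooled and projected to 768."""

    def __init__(self, char_vocab_size: int, char_dim: int = 50,
                 n_filters: int = 16, out_dim: int = 768):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, char_dim, padding_idx=0)
        self.convs = nn.ModuleList([
            nn.Conv1d(char_dim, n_filters, kernel_size=k, padding=k // 2)
            for k in range(1, 6)
        ])
        self.proj = nn.Linear(5 * n_filters, out_dim)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, n_tokens, max_chars) character ids per WordPiece token
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids).view(b * t, c, -1).transpose(1, 2)   # (b*t, 50, chars)
        pooled = [conv(x).max(dim=-1).values for conv in self.convs]     # 5 x (b*t, 16)
        feats = torch.relu(self.proj(torch.cat(pooled, dim=-1)))         # (b*t, 768)
        return feats.view(b, t, -1)
```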
Orthographic Pattern Feature To capture the intrinsic orthographic patterns (or word shapes) of entity mentions at the sub-word level, we map all uppercase tokens to a single character, U, all lowercase tokens to L, all digit tokens to D. If a token contains a mix of uppercase, lowercase and digits, we map each lowercase character to l, uppercase to u and digit to d. Special characters are retained and BERT's special tokens, "[CLS]" and "[SEP]",
are mapped to C and S respectively.
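A small helper implementing this orthographic mapping could look like the sketch below; the treatment of WordPiece "##" prefixes and of mixed or empty edge cases is an assumption on our part rather than the authors' exact rule set.

```python
def ortho_pattern(token: str) -> str:
    """Map a (WordPiece) token to its orthographic pattern as described above."""
    if token == "[CLS]":
        return "C"
    if token == "[SEP]":
        return "S"
    body = token[2:] if token.startswith("##") else token  # assumption: drop WordPiece prefix
    if all(ch.isupper() for ch in body):
        return "U"
    if all(ch.islower() for ch in body):
        return "L"
    if body.isdigit():
        return "D"
    # Mixed tokens: map each character individually; special characters are retained.
    out = []
    for ch in body:
        if ch.isupper():
            out.append("u")
        elif ch.islower():
            out.append("l")
        elif ch.isdigit():
            out.append("d")
        else:
            out.append(ch)
    return "".join(out)

print(ortho_pattern("CVE-2015"))  # -> "uuu-dddd"
```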
We use 3 CNNs with the same setup as character sequence with kernel sizes of 1 to 3. Note that a contextual learning layer is needed to capture patterns in mentions spanning multiple tokens.
Thus, we pass the pattern-level embeddings for all tokens to a bidirectional LSTM with 256 hidden dimensions as shown in Figure 2b. Finally, the character and pattern features are concatenated with the BERT output for the token and fed to a final classifier layer as shown in Figure 3.

## 2.2 **Span Classification**
Given a sentence S = ⟨w1, w2, . . . , wn⟩ and a span ⟨s, e⟩, this step determines the entity type for the span. Existing QA-based NER methods take the target entity type as the question (e.g., "Where is person?") and return the corresponding mentions in the sentence. On the contrary, our model takes a mention as the question (e.g., "What is Emily?") and outputs its entity type.
During training, we create a training sample for each labeled entity mention in a sentence.
Figure 2: (a) Character Sequence Feature Learning (b) Orthographic Feature Learning
entity types with non-word mentions (e.g., chemical formulas) and very long mentions (e.g., URLs).
| Dataset | Type | Density | Train | Dev | Test |
|--------------|--------|-----------|---------|--------|--------|
| BioNLP13CG | 16 | 3.59 | 3,033 | 1,003 | 1,906 |
| CTIReports | 8 | 0.63 | 38,721 | 6,322 | 9,837 |
| OntoNotes5.0 | 18 | 1.36 | 59,924 | 8,528 | 8,262 |
| WNUT17 | 6 | 0.68 | 3,394 | 1,009 | 1,287 |
During inference, the model gets the mention spans from *Span Detection Model* as its input. An input sample is created by appending the mention span text as
"What is [mention]?" to the input sentence (see top diagrams in Figure 1 for example). This is fed to a BERT model and the pooled sequence embedding is fed to a fully connected layer and converted into a probability distribution over the entity types.
## 3 **Experimental Results**
We demonstrate the effectiveness of our method in terms of performance and latency.
## 3.1 **Datasets**
Table 1 shows our datasets, including three public benchmark datasets, *BioNLP13CG* (Pyysalo et al., 2015), *OntoNotes5.0* (Weischedel et al.,
2013), and *WNUT17* (Derczynski et al., 2017), and a private dataset3(*CTIReports*) from the cybersecurity domain which contains news articles and technical reports related to malware and security threats. These datasets cover not only the traditional whole-word entities like PERSON but also
## 3.2 **Experimental Setup**
We implement our baselines and our proposed system, SplitNER in pytorch using transformers (Wolf et al., 2019). All models are trained on *Nvidia Tesla V100* GPUs and use BERT-base architecture. We use pretrained RoBERTa-*base* (Liu et al., 2019) backbone for all experiments with *OntoNotes5.0* corpus following Ye et al. (2022); Zhu and Li (2022) and use SciBERT-*scivocab-uncased* (Beltagy et al., 2019)
for *BioNLP13CG* since this dataset has chemical formulas and scientific entities4. For *WNUT17*5 and *CTIReports*, we use BERT-*base-uncased* (Devlin et al., 2019). Note that our model is a general two-step NER framework which has the performance benefits of QA-based and span-based approaches with efficiency. It can work with any BERT-based pretrained backbones.
The training data is randomly shuffled, and a batch size of 16 is used with post-padding.
The maximum sequence length is set to 512 for
| Model | BioNLP13CG | CTIReports | OntoNotes5.0 | WNUT17 |
|-------------------------------|--------------|--------------|----------------|----------|
| SplitNER(QA-QA) | 86.75 | 74.96 | 90.86 | 57.25 |
| SplitNER(QA*NoCharPattern*-QA) | 86.70 | 74.05 | 90.58 | 56.24 |
| SplitNER(SeqTag-QA) | 86.08 | 73.84 | 90.30 | 56.10 |
| Single(QA) | 86.68 | 71.70 | 89.02 | 43.45 |
| Single(SeqTag) | 87.08 | 72.36 | 88.64 | 44.97 |
Table 2: NER Performance Comparison (mention-level F1). SplitNER(QA-QA) is our proposed method.
| Model | BioNLP13CG Train | BioNLP13CG Inf. | CTIReports Train | CTIReports Inf. | OntoNotes5.0 Train | OntoNotes5.0 Inf. | WNUT17 Train | WNUT17 Inf. |
|---|---|---|---|---|---|---|---|---|
| SplitNER(QA-QA) | 241.2 | 57.7 | 1,455.7 | 120.0 | 3,007.8 | 183.0 | 122.9 | 26.0 |
| Single(QA) | 1,372.8 | 323.3 | 8,771.0 | 551.6 | 73,818.4 | 2,227.8 | 568.2 | 91.2 |
| Single(SeqTag) | 102.2 | 25.2 | 6,425.9 | 86.4 | 9,181.1 | 105.0 | 101.3 | 18.6 |
Table 3: Comparison of training and inference (Inf.) latency in seconds.
*OntoNotes5.0* and to 256 for all other datasets.
For model optimization, we use cross entropy loss for span detection and dice loss (Li et al., 2020c) for span classification. All other training parameters are set to defaults in transformers.
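Dice loss comes in several variants; the sketch below shows one common soft-dice formulation over softmax probabilities and one-hot targets to illustrate the idea, and is not necessarily the exact self-adjusting variant of Li et al. (2020c).

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, targets: torch.Tensor,
                   smooth: float = 1.0) -> torch.Tensor:
    """One common soft-dice formulation for multi-class classification.

    logits: (batch, num_classes); targets: (batch,) integer class indices.
    """
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes=logits.size(-1)).float()
    intersection = (probs * one_hot).sum(dim=0)            # per-class overlap over the batch
    denom = probs.sum(dim=0) + one_hot.sum(dim=0)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - dice.mean()
```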
## 3.3 **Performance Evaluation**
We compare our method SplitNER(QA-QA)
with the following baselines and variants. (1)
Single(SeqTag): The standard single-model sequence tagging NER setup which classifies each token using BIOE scheme. (2) Single(QA): The standard single-model QA-based setup which prefixes input sentences with a question describing the target entity type (e.g., Where is the person *mentioned in the text?*); (3)
SplitNER(SeqTag-QA): A variant of our model which uses sequence tagging for span detection with our QA-based *Span Classification Model*;
(4) SplitNER(QA*NoCharPattern*-QA): This model is the same as our method but without the additional character and pattern features. All other baselines use character and pattern features for fair comparison. We trained all models with 5 random seeds and report the mean mention-level Micro-F1 score in Table 2. As can be seen, SplitNER(QA-QA) outperforms all baselines on three cross-domain datasets and gives comparable results on *BioNLP13CG*. We present further ablation studies on individual components of our system in Appendix A and a qualitative study in Appendix B.
## 3.4 **Latency Evaluation**
We compare the latency of our method, SplitNER(QA-QA) and the two single-model NER
methods. Table 3 shows the training and inference times. Training time is measured for one epoch and averaged over 10 runs. For a fair comparison, we report the training latency for our system as the sum of span detection and classification even though they can be trained in parallel.
The results show that, compared to Single(QA),
our method is 5 to 25 times faster for training and about 5 times faster for inference, and it is especially beneficial for large datasets with many entity types. Compared to Single(SeqTag), our method is slightly slower but achieves much better F1 scores (Table 2). These results validate SplitNER(QA-QA)'s effectiveness in achieving the balance between performance and time efficiency.
## 4 **Related Work**
In recent years, deep learning has been increasingly applied for NER (Torfi et al., 2020; Li et al.,
2020a), a popular architecture being CNN-LSTM-CRF (Ma and Hovy, 2016; Xu et al., 2021) and BERT (Devlin et al., 2019). Li et al. (2020b,c) propose a QA-based setup for NER using one model for both span detection and classification. Li et al.
(2020b); Jiang et al. (2020); Ouchi et al. (2020); Fu et al. (2021); Zhu and Li (2022) perform NER as a span prediction task. However, they enumerate all possible spans in a sentence leading to quadratic complexity w.r.t. sentence length. Our model does a token-level classification and hence is linear.
Xu et al. (2021) propose a Syn-LSTM setup leveraging dependency tree structure with pretrained BERT embeddings for NER. Yan et al.
(2021) propose a generative framework leveraging BART (Lewis et al., 2020) for NER. Yu et al.
(2020) propose a biaffine model utilizing pretrained BERT and FastText (Bojanowski et al., 2017) embeddings along with character-level CNN setup over a Bi-LSTM architecture. All of these models report good performance on *OntoNotes5.0*, however, using BERT-*large* architecture. Nguyen and Vu (2020) propose the BERTweet model by training BERT on a corpus of English tweets and report good performance on *WNUT17*. Wang et al.
(2021) leverage external knowledge and a cooperative learning setup. On *BioNLP13CG*, Crichton et al. (2017) report 78.90 F1 in a multi-task learning setup and Neumann et al. (2019) report 77.60 using the SciSpacy system. SplitNER(QA-QA) outperforms both of these by a large margin.
## 5 **Conclusion**
Using the QA-framework for both span detection and span classification, we show that this division of labor is not only effective but also significantly efficient through experiments on multiple cross-domain datasets. Through this work, we open up the possibility of breaking down other complex NLP tasks into smaller sub-tasks and fine-tuning large pretrained language models for each task.
## Limitations
Our proposed approach requires to train two independent classification models. While the models can be trained in parallel, this requires larger GPU memory. For the experiments, we trained two BERT-base models, which have around 220M trainable parameters when trained in parallel. This requires almost twice the GPU memory compared to a single BERT-base NER model, having around 110M trainable parameters.
Owing to a pipeline-based structure, the overall performance of our system is upper bounded by the performance of *Span Detection Model* which has lots of potential for improvement. On dev set, we find that around 30% of errors for *OntoNotes5.0* and *BioNLP13CG*, and around 22% errors on WNUT17 are just due to minor boundary detection issues. Their entity types are being detected correctly. We henceforth encourage the research community to design architectures or new training objectives to detect mention boundaries more effectively. Currently, in our *Span Detection Model*,
all entity mentions are grouped into a single class.
As a potential future work, we expect to get even better performance by a hierarchical extension of our setup. At the top level, we can detect mentions belonging to some crude categories and gradually break them down into more fine-grained categories.
## References
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert:
A pretrained language model for scientific text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Association for Computational Linguistics.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146.
Hicham Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, and Pierre Zweigenbaum. 2020. Characterbert: Reconciling elmo and bert for word-level open-vocabulary representations from characters. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6903–6915.
Xavier Carreras, Lluís Màrquez, and Lluís Padró.
2002. Named entity extraction using AdaBoost. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
Xavier Carreras, Lluís Màrquez, and Lluís Padró. 2003.
Learning a perceptron-based named entity chunker via online recognition feedback. In *Proceedings of* the Seventh Conference on Natural Language Learning at HLT-NAACL 2003.
Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Trans.
Assoc. Comput. Linguistics, 4:357–370.
Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learning approach to biomedical named entity recognition.
BMC bioinformatics, 18(1):1–14.
Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the wnut2017 shared task on novel and emerging entity recognition.
In *Proceedings of the 3rd Workshop on Noisy Usergenerated Text*, pages 140–147.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4171–4186.
Jinlan Fu, Xuanjing Huang, and Pengfei Liu. 2021.
Spanner: Named entity re-/recognition as span prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, pages 7183–7195.
Zhengbao Jiang, Wei Xu, Jun Araki, and Graham Neubig. 2020. Generalizing natural language analysis through span-relation representations. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, ACL, pages 2120–2133.
Association for Computational Linguistics.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. The Association for Computational Linguistics.
Lukas Lange, Heike Adel, Jannik Strötgen, and Dietrich Klakow. 2021. FAME: feature-based adversarial meta-embeddings for robust input representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP,
pages 8382–8395.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.
2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7871–7880.
Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li.
2020a. A survey on deep learning for named entity recognition. *IEEE Transactions on Knowledge and* Data Engineering.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020b. A unified MRC
framework for named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, ACL, pages 5849–5859.
Association for Computational Linguistics.
Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2020c. Dice loss for dataimbalanced NLP tasks. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, ACL, pages 465–476. Association for Computational Linguistics.
Nut Limsopatham and Nigel Collier. 2016. Bidirectional LSTM for named entity recognition in Twitter messages. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 145–152.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In *Proceedings of the 54th Annual Meeting of the* Association for Computational Linguistics ACL.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing. In *Proceedings of the 18th BioNLP Workshop and Shared Task*,
pages 319–327.
Dat Quoc Nguyen and Thanh Vu. 2020. Bertweet: A
pre-trained language model for english tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9–14.
Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno, and Kentaro Inui. 2020. Instance-based learning of span representations: A case study through named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,*
ACL, pages 6452–6459.
Sampo Pyysalo, Tomoko Ohta, Rafal Rak, Andrew Rowley, Hong-Woo Chun, Sung-Jae Jung, Sung-Pil Choi, Jun'ichi Tsujii, and Sophia Ananiadou. 2015.
Overview of the cancer genetics and pathway curation tasks of bionlp shared task 2013. *BMC bioinformatics*, 16(10):1–19.
Karl Stratos. 2017. Entity identification as multitasking. In *Proceedings of the 2nd Workshop on Structured Prediction for Natural Language Processing,*
SPNLP@EMNLP, pages 7–11.
Amirsina Torfi, Rouzbeh A Shirvani, Yaser Keneshloo, Nader Tavvaf, and Edward A Fox. 2020. Natural language processing advancements by deep learning:
A survey. *arXiv preprint arXiv:2003.01200*.
Juncheng Wan, Dongyu Ru, Weinan Zhang, and Yong Yu. 2022. Nested named entity recognition with span-level graphs. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), ACL, pages 892–903. Association for Computational Linguistics.
Xinyu Wang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021. Improving named entity recognition by external context retrieving and cooperative learning. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint
Conference on Natural Language Processing, pages 1800–1812.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19.
Linguistic Data Consortium, Philadelphia, PA, 23.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing.
Lu Xu, Zhanming Jie, Wei Lu, and Lidong Bing. 2021.
Better feature integration for named entity recognition. In *Proceedings of the 2021 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3457–3469.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5808–5822.
Deming Ye, Yankai Lin, Peng Li, and Maosong Sun.
2022. Packed levitated marker for entity and relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 4904–4917.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020.
Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470–
6476.
Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. *arXiv preprint* arXiv:2204.12031.
## A **Performance Ablations**
Here, we study the individual components of our system, SplitNER(QA-QA) in detail. First, we investigate the effectiveness of the additional character and pattern features for span detection. As we can see from Table 4, the character and pattern features improve the NER performance for all datasets.
We also study the effect of the character and pattern features separately. Table 5 shows this ablation study on the *BioNLP13CG* dataset. As we can see, adding the character feature or the pattern feature alone makes a small change in the performance. Interestingly, the character feature helps with recall, while the pattern features improves precision, and, thus, adding them together improves both precision and recall. However, adding part-of-speech (POS)
in addition to the character and pattern features shows little impact on the performance.
Next, we compare dice loss and cross-entropy loss for their effectiveness in handling the class imbalance issue in span classification. As shown in Table 6, dice loss works better for imbalanced data confirming the results found in Li et al. (2020c).
Finally, we experimented with different question sentences in *Span Detection Model* to check if BERT is giving any importance to the query part.
As shown in Table 7, different queries do have a minor impact but as expected, the model mostly learns not to focus on the query part as can be seen by the comparable results with *<empty>* query.
A.1 **Discussions**
From the results of the experiments described in Section 3 together with the ablation studies, we make the following observations:
- As shown in Table 2, SplitNER(QA-QA)
outperforms both the sequence tagging and QA-based baselines on three crossdomain datasets and performs on-par on BioNLP13CG.
- The division of labor allows each model to be optimized for its own sub-task. Adding character and pattern features improves the accuracy of *Span Detection Model* (Table 4).
However, adding these same features in Span Classification Model was found to deteriorate the performance. Similarly, dice loss improves the performance for *Span Classification Model* (Table 6), but no such impact was observed for *Span Detection Model*.
- Span detection using the QA setting is slightly more effective than the sequence tagging setup as done in SplitNER(SeqTag-QA) (Table 2).
- Our model has more representative power than the baseline approaches, because it leverages two BERT models, each working on their own sub-tasks.
- It also leverages the QA framework much more efficiently than the standard singlemodel QA system (Table 3). The margin of improvement is more pronounced when the data size and number of entity types increase.
| Span Detection Features | BioNLP13CG P | BioNLP13CG R | BioNLP13CG F1 | CTIReports P | CTIReports R | CTIReports F1 | OntoNotes5.0 P | OntoNotes5.0 R | OntoNotes5.0 F1 | WNUT17 P | WNUT17 R | WNUT17 F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| +CharPattern | 91.43 | 90.70 | 91.06 | 80.59 | 77.21 | 78.86 | 92.17 | 92.83 | 92.50 | 73.38 | 44.25 | 55.21 |
| -CharPattern | 90.31 | 91.03 | 90.67 | 79.65 | 77.77 | 78.70 | 91.96 | 92.79 | 92.37 | 72.63 | 44.06 | 54.85 |
Table 4: *Span Detection Model* performance with and without character and pattern features.
| Features | P | R | F1 |
|-------------------|-------|-------|-------|
| Base Model | 90.31 | 91.03 | 90.67 |
| +Char | 89.85 | 91.45 | 90.64 |
| +Pattern | 91.29 | 90.22 | 90.75 |
| +Char+Pattern | 91.43 | 90.70 | 91.06 |
| +Char+Pattern+POS | 91.14 | 90.64 | 90.89 |
Table 5: *Span Detection Model* performance for BioNLP13CG with different feature sets. Base Model does not use character and pattern features.
| Dataset | Dice Loss | Cross Entropy Loss |
|--------------|-------------|----------------------|
| BioNLP13CG | 94.27 | 94.04 |
| CyberThreats | 87.84 | 87.58 |
| OntoNotes5.0 | 96.74 | 96.50 |
| WNUT17 | 73.40 | 73.31 |
Table 6: Span classification performance comparison
| Question Type | F1 |
|---------------------------------------------------------|-------|
| Extract important entity spans from the following text. | 90.67 |
| Where is the entity mentioned in the text? | 90.38 |
| Find named entities in the following text. | 90.32 |
| <empty> | 90.48 |
Table 7: *Span Detection Model* performance for BioNLP13CG using different questions. <empty> denotes an empty question sentence. All experiments were done using Base Model in Table 5.
- The training time for our model in Table 3 considers *Span Detection Model* and *Span* Classification Model being trained sequentially. However, the two components can be trained in parallel, reducing the overall train time significantly. The sequential execution is necessary only at inference time.
- *WNUT17* has a diverse range of rare and emerging entities crudely categorized into 6 entity types. A single-model NER system may get confused and try to learn sub-optimal entity-specific extraction rules. Our task segregation allows *Span Detection Model* to form generalized extraction rules which is found to be more effective as shown in Table 2.
- As a sidenote, all the models built in this work outperform the previously published approaches on *BioNLP13CG* (Table 2), thus setting new state-of-the-art results. The credit goes to the SciBERT model and the additional character and pattern features.
## B **Qualitative Analysis**
Table 8 shows some sample predictions by our method, SplitNER(QA-QA) and compares them with our single-model NER baseline, Single(QA).
From the results, we observe that:
- SplitNER(QA-QA) is better in detecting emerging entities and out-of-vocabulary
(OOV) terms (e.g., movie titles and softwares). This can be attributed to *Span Detection Model* being stronger in generalizing and sharing entity extraction rules across multiple entity types.
- Single(QA) gets confused when entities have special symbols within them (e.g., hyphens and commas). Our character and orthographic pattern features help handle such cases well.
- Single(QA) model develops a bias towards more common entity types (e.g., PERSON) and misclassifies rare entity mentions when they occur in a similar context.
SplitNER(QA-QA) handles such cases well thanks to the dedicated *Span Classification* Model using dice loss.
## C Ctireports **Dataset**
The *CTIReports* dataset is curated from a collection of 967 documents which include cybersecurity news articles and white papers published online by reputable companies and domain knowledge experts. These documents usually provide deep analysis on a certain malware, a hacking group or a newly discovered vulnerability (like a bug in software that can be exploited). The documents were published between 2016 and 2018. We split the dataset into the train, development, and test sets as shown in Table 9.
A team of cybersecurity domain experts labeled the dataset for the following 8 entity types. These
| Category | Example Sentence |
|---|---|
| General Detection | CVS selling their own version of ... |
| | CVS selling their own version of ... |
| Emerging Entities | Rogue One create a plot hole in Return of the Jedi |
| | Rogue One create a plot hole in Return of the Jedi |
| Scientific Terms | Treating EU - 6 with anti-survivin antisense ... |
| | Treating EU - 6 with anti-survivin antisense ... |
| Boundary | Hotel Housekeepers Needed in Spring , TX ... |
| | Hotel Housekeepers Needed in Spring , TX ... |
| OOV Terms | Store SQL database credentials in a webserver |
| | Store SQL database credentials in a webserver |
| Entity Type | Why do so many kids in Digimon wear gloves? |
| | Why do so many kids in Digimon wear gloves? |
Table 8: Qualitative comparison of SplitNER(QA-QA)
and Single(QA) systems. For each category, the first line shows the result of Single(QA), and the second line shows the result of SplitNER(QA-QA). The words in italics are the entity mentions extracted by the systems color-coded as ORG, CREATIVE WORK, GENE,
LOCATION and PRODUCT.
types were selected based on the STIX (Structured Threat Information Expression) schema which is used to exchange cyber threat intelligence. For more detailed information about the 8 types, please refer to the STIX documentation.
- CAMPAIGN: Names of cyber campaigns that describe a set of malicious activities or attacks over a period of time.
- COURSE OF ACTION: Tools or actions to take in response to cyber attacks.
- EXPLOIT TARGET: Vulnerabilities that are targeted for exploitation.
- IDENTITY: Individuals, groups or organizations.
- INDICATOR: Objects that are used to detect suspicious or malicious cyber activity such as domain name, IP address and file names.
- MALWARE: Names of malicious codes used in cyber crimes.
- RESOURCE: Tools that are used in cyber attacks.
- THREAT ACTOR: Individuals or groups that commit cyber crimes.
Table 10 and Table 11 show the statistics of the entity types in the corpus and some sample mentions of these types respectively.
| | Train | Test | Dev | Total |
|-------------|---------|---------|---------|---------|
| # documents | 667 | 133 | 167 | 967 |
| # sentences | 38,721 | 9,837 | 6,322 | 54,880 |
| # tokens | 465,826 | 119,613 | 92,788 | 678,227 |
Table 9: Summary of the *CTIReports* corpus showing the number of documents, sentences and tokens in each dataset.
| Entity Type | Train | Dev | Test |
|------------------|---------|-------|--------|
| CAMPAIGN | 247 | 27 | 85 |
| COURSE OF ACTION | 1,938 | 779 | 329 |
| EXPLOIT TARGET | 5,839 | 1,412 | 1,282 |
| IDENTITY | 6,175 | 1,262 | 1,692 |
| INDICATOR | 3,718 | 1,071 | 886 |
| MALWARE | 4,252 | 776 | 1,027 |
| RESOURCE | 438 | 91 | 114 |
| THREAT ACTOR | 755 | 91 | 144 |
Table 10: The number of mentions for each entity type in the train, development and test sets
| Entity Type | Examples |
|---|---|
| CAMPAIGN | "Operation Pawn Storm", "The Mask", "MiniDuke", "Woolen-Goldfish", "Ke3chang" |
| COURSE OF ACTION | "Trojan.Poweliks Removal Tool", "HPSBHF03535", "TDSSKiller", "cd chktrust -i FixTool.exe", "http://www.ubuntu.com/usn/usn-2428-1", "Initial Rapid Release version June 15, 2015 revision 02" |
| EXPLOIT TARGET | "CVE-2015-8431", "Adobe Flash Player", "Ubuntu", "Windows", "CGI.pm", "version 20.0.0.306 and earlier" |
| IDENTITY | "Symantec", "Jon DiMaggio", "Belgium", "Kaspersky Lab", "RSA" |
| INDICATOR | "C:\WINDOWS\assembly\GAC_MSIL", "hxxp://deschatz-army.net", "67.23.112.226", "b4b483eb0d25fa3a9ec589eb11467ab8" |
| MALWARE | "ChewBacca", "SONAR.AM.E.J!g13", "Trojan.Poweliks", "BlackHole", "TDL3", "Locky", "Zeus", "JS/TrojanDownloader.Nemucod" |
| RESOURCE | "IRC", "Tor", "DroidPlugin", "Onion", "PowerShell", "Google Play", "Free Pascal 2.7.1.", "Teamviewer" |
| THREAT ACTOR | "ProjectSauron", "Strider", "Ogundokun", "APT28", "APT 28", "Fancy Bear", "Pro_Mast3r", "Equation Group" |
Table 11: Sample entity mentions for each type in the CTIReports corpus
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 2, 3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 2, 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2, 3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
peskoff-stewart-2023-credible | Credible without Credit: Domain Experts Assess Generative Language Models | https://aclanthology.org/2023.acl-short.37 | Language models have recently broken into the public consciousness with the release of the wildly popular ChatGPT. Commentators have argued that language models could replace search engines, make college essays obsolete, or even write academic research papers. All of these tasks rely on accuracy of specialized information which can be difficult to assess for non-experts. Using 10 domain experts across science and culture, we provide an initial assessment of the coherence, conciseness, accuracy, and sourcing of two language models across 100 expert-written questions. While we find the results are consistently cohesive and concise, we find that they are mixed in their accuracy. These results raise questions of the role language models should play in general-purpose and expert knowledge seeking. | # Credible Without Credit: Domain Experts Assess Generative Language Models
Denis Peskoff
Princeton University Office of Population Research
[email protected]

Brandon M. Stewart
Princeton University Sociology and Office of Population Research
[email protected]
## Abstract
Language models have recently broken into the public consciousness with the release of the wildly popular ChatGPT. Commentators have argued that language models could replace search engines, make college essays obsolete, or even write academic research papers.
All of these tasks rely on accuracy of specialized information which can be difficult to assess for non-experts. Using 10 domain experts across science and culture, we provide an initial assessment of the coherence, conciseness, accuracy, and sourcing of two language models across 100 expert-written questions. While we find the results are consistently cohesive and concise, we find that they are mixed in their accuracy. These results raise questions of the role language models should play in generalpurpose and expert knowledge seeking.
## 1 Do Experts Agree With Chatgpt?
Since its release in late November 2022, ChatGPT has gained over 100 million users in just two months and been the subject of breathless news coverage which claims it threatens to "replace search engines" (Loten, 2022; Grant and Metz, 2022), kill the college essay (Marche, 2022), and automate the writing of scientific research (Stokel-Walker, 2023). These tasks are distinct from the kind usually evaluated in NLP because they all rely on expert-level knowledge. In this paper, we survey 10 experts to obtain subjective assessments of how two recent language models engage with questions in diverse domains.
Our efforts build on prior work to evaluate the capabilities of language models. Language models are now regularly subjected to extensive benchmarks which cover a variety of standard NLP tasks (Wang et al., 2019; Brown et al., 2020; Ribeiro et al., 2020; Srivastava et al., 2022). Recent efforts engage in domain-specific tasks such as taking the bar or medical licensing exams (Katz et al., 2023; Kung et al., 2023) and making political arguments (Palmer and Spirling, 2023). Liu et al.
(2023), released on arXiv while this paper was under review, evaluates the ability of generative search engines to answer a range of general knowledge queries. We complement these efforts by having experts craft their own information-seeking questions and evaluate the generated responses.
In the next section, we briefly discuss the role of expertise in language models and our goals in evaluating it. We then describe our methodology 427
(Section 3). We find ChatGPT and YouChat to be cohesive and coherent (Section 4.1), generally accurate with some misses (Section 4.2), and ubiquitously lacking in sources (Section 4.3). A majority of our experts recommend these models for general purpose questions but not for professional settings
(Section 4.4). We conclude with implications and contrast with the contemporaneous findings in Liu et al. (2023) (Section 5).
## 2 "Expertise" In Language Models
Individuals and companies are increasingly looking to language models as a source of expert question answering. For example, Metzler et al. (2021) lays out a vision for search that involves language models providing answers to user-generated questions.
Unfortunately, the challenge for many language models is that they are trained to generate language, not to have correct answers. As Shah and Bender write, "to the extent that [language models] sometimes get the right answer to. . . questions [it] is only because they happen to synthesize relevant strings out of what was in their training data. No reasoning is involved" (Shah and Bender, 2022, pg.
222). This has led Narayanan and Kapoor (2022)
to characterize ChatGPT as a "bullshit generator"— plausible, but not accurate (Frankfurt, 2005). While language models might incidentally produce accurate answers to simple and uncontested queries
(e.g., "what is the capital of France?"), we might be understandably skeptical that it will produce correct answers to more nuanced questions. Generated language reflects its training data and—to the extent the training data is publicly known—it is more reflective of the web than expert speech (Bender et al., 2021). By using experts evaluating material in their domain of choice, we provide an initial assessment of expertise provided by these models.
Ultimately what constitutes sufficient accuracy for broader use depends on the use case.
## 3 Methodology
We evaluate two recently-released language models: OpenAI's ChatGPT and You.com's YouChat
(Google's Bard and many other options weren't released at the time of initial submission). OpenAI's ChatGPT is the wildly popular evolution of the GPT-3 model (Brown et al., 2020) and YouChat is built specifically for search. Both systems have a free and public option (at the time of writing)
which makes them generally accessible.
We survey 10 experts across a range of arbitrarily-chosen disciplines from quantum information to ballroom dance (see a complete list in the appendix). We recruited experts from our personal networks aiming to cover a wide range of different types of knowledge (with the understanding we cannot be exhaustive or representative). The majority hold a doctorate or medical degree.
We asked each expert to fill out an online survey with their own description of their area of expertise, two Wikipedia pages pertinent to it, and five common questions and five niche questions from their domain (see Table 1 for examples).1In a second wave of the survey, we provide answers generated from these questions using ChatGPT and YouChat and ask them to rank the answers on a 5-point Likert-type item for coherence, conciseness, accuracy, sourcing, and quality of content relative to Wikipedia (Likert, 1932). We ask for open-ended feedback on answers and alternate which system the experts evaluate first. Questions are designed to allow experts to focus on their own area of expertise while providing an opportunity to distinguish between different levels of knowledge-specialization.
The survey took one hour on average. Six experts were surveyed in January and four in May of 2023.
The survey design elicits subjective expert judgment of system performance. We evaluate coherence, conciseness, and accuracy as important properties in information-seeking (Cambazoglu et al.,
2021). Comparing assessments to Wikipedia provides a difficult-to-beat baseline with which many people are already familiar. We also ask whether the language model provides a source for its information. Evaluating the source of the information in the response is important not only for the purposes of giving credit, but also as a mechanism for accountability (Bender et al., 2021; Liu et al., 2023).
After all the questions, we directly ask whether the expert would recommend the tool for general purpose and professional use, and if the style of the content is obviously automated (Dou et al., 2022).
We make our data (including the full context for all quotes we use here) publicly available to help support future work.2
| Area | Example Common Question | Example Niche Question |
|---|---|---|
| Family Medicine | Does everyone get cancer cells in their body? | Are there some parts of cognitive function that improve with age after age 40? |
| Radiology | Can imaging studies help diagnose traumatic injuries? | What is better to detect and diagnose necrotizing enterocolitis in infants, x rays or ultrasound? |
| Toxicology | What are the most common laboratory findings in a patient with methanol poisoning? | In what scenario should methylene blue be administered for methemoglobinemia? |
| Geoarchaeology | What formed parallel and perpendicular straight water channels in Mesoamerica? | Could geological faulting create parallel channels (streams or canals) that cross at right angles? |
| Neuroscience | How do neurons communicate in the brain? | Are DREADDs a "good" method for manipulating neural activity? |
| Quantum Information | What speedups can quantum computers achieve over classical computers? | Are superconducting qubits the best platforms for realizing quantum computers when compared to trapped ion or semiconductor or neutral-atom qubits? |
| Linguistics | Are there drawbacks to being bilingual? | Is any aspect of syntax universal across all languages? |
| Digital Humanities | Which was the first novel? | What is the fate of formalism today in literary criticism? |
| Ballroom | What are some of the most popular ballroom or Latin American partner dances? | Should a competitive dancer seek the opinions of a wide variety of dance judges? |
| Opera | Which is the best version of "Tannhäuser"? | Why did Rossini stop composing operas at age 37? |
Table 1: A common and niche question, selected from one of 10, written by the experts from our 10 domains.
## 4 Results Across Disciplines
We discuss two running examples: one about Maria Callas, a famous New York-born opera singer with an international career often associated with Greece, and one about quantum information. Our opera expert asks, "Was Maria Callas American?".
ChatGPT answers *"No, Maria Callas was not* American. Maria Callas, whose full name was Maria Anna Cecilia Sofia Kalogeropoulos, was born on December 2, 1923, in New York City, United States. However, she was of Greek descent. Her parents were Greek immigrants, and she later acquired Greek citizenship. Maria Callas is renowned as one of the greatest opera singers of the 20th century and was particularly known for her performances in the bel canto repertoire."
As a second example, a quantum expert asks
"Can quantum computers be realized in the physical world?". YouChat answers, "Yes, quantum computers can be realized in the physical world.
Researchers have been able to build prototype systems that demonstrate the principles of quantum computing, and there are a number of research groups and companies working on building more advanced versions. Additionally, several companies have announced plans to build full-scale quantum computers in the next few years."
## 4.1 Answers Are Credible
Answers are judged as coherent (avg 4.5), and concise (avg 4.2) by our experts. Responses generally restate the question, provide relevant information, are grammatically correct, and are formal in tone.
The responses were quite stable on regeneration.
Although results are relatively concise, they do differ in length. ChatGPT's answer to the question about Maria Callas is four sentences including a final sentence about her career that is completely unrelated (YouChat's is 3). For the question on quantum information we gave above, ChatGPT provided a three paragraph answer which our expert described as ""a well constructed and nuanced answer that synthesizes information from multiple perspectives"" while YouChat used three sentences.
## 4.2 Uneven Accuracy
While responses are fairly uniform in coherence, they are uneven in terms of accuracy (with 111 of the 200 responses marked as one of the two most accurate categories and 38 marked in the two lowest accuracy categories). Surprisingly, niche questions were only slightly less accurate than common ones (-.16). Examining the comments suggests that the rankings reflect fairly different standards for what counts as accurate (expert ratings are included in parentheses below, where a 1 is completely inaccurate and a 5 is completely accurate). We urge caution in interpreting the averages.
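To make the kind of tally reported above concrete, a minimal sketch of the aggregation is given below; the records, field names, and values are hypothetical placeholders rather than the study's actual ratings or schema.

```python
# Illustrative only: aggregate 1-5 accuracy ratings of the kind reported above.
# The records below are made-up placeholders, not the study's actual data.
ratings = [
    {"domain": "Radiology", "kind": "common", "accuracy": 5},
    {"domain": "Radiology", "kind": "niche", "accuracy": 4},
    {"domain": "Toxicology", "kind": "common", "accuracy": 1},
    {"domain": "Opera", "kind": "niche", "accuracy": 2},
]

top_two = sum(r["accuracy"] >= 4 for r in ratings)      # "two most accurate categories"
bottom_two = sum(r["accuracy"] <= 2 for r in ratings)   # "two lowest accuracy categories"

def mean_accuracy(kind):
    vals = [r["accuracy"] for r in ratings if r["kind"] == kind]
    return sum(vals) / len(vals)

# niche-minus-common gap (the paper reports roughly -0.16 on its real data)
gap = mean_accuracy("niche") - mean_accuracy("common")
print(top_two, bottom_two, round(gap, 2))
```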
On the question about Maria Callas, ChatGPT
asserts "No" while clarifying that she was born in New York (1) while YouChat answers "Yes" (2).
Both comment on her additional Greek citizenship.
Our expert gave both quantum information answers top marks for accuracy (ChatGPT:5, YouChat:4).
Seven experts gave at least one answer the lowest accuracy score, suggesting it is completely wrong. For example, in a toxicology answer, ChatGPT gave "a list of causes of *anion gap* acidosis instead of *NON-anion gap* acidosis" (1). Similarly, YouChat answered the wrong question from our ballroom dance expert by confusing "the Viennese Waltz with the Waltz. The answer describes an entirely different dance from the dance the question is about" (1). Many other answers, though, were quite accurate. Our geoarchaeologist expressed a common sentiment that the responses are "basic but generally correct" (4). Other answers were "excellent, nuanced" (5, toxicologist). The fairly uniform coherence makes it difficult for a non-expert to discern the correct information from the noise.
The answers also varied in their ability to capture uncertainty in the field overall. Our neuroscientist noted that ChatGPT "accurately captured the controversy surrounding use of DREADDs" (5) but that YouChat "was unable to capture the longstanding controversy" (4). The toxicologist noted that ChatGPT offered a "definitive answer to something that is not totally agreed upon" on the subject of dialysis for lithium poisoning (3). By contrast, our linguist observed on a niche question that "the response to the query about complex predicates is appropriately waffly" (5).
We close this section by noting that even for experts, assessing accuracy can be complicated. Our linguist notes "I would say that the response is invalid, but there are linguists who would agree with it and YouChat does flag the fact its controversial" (2) and the geoarcheologist cited overclaiming, writing that YouChat "takes too strong a position that the evidence does not back up" (1). Such cases are difficult to adjudicate—what counts as sufficient evidence?—but the difficulty is inevitable with complex questions.
## 4.3 Sourcing Is Almost Completely Absent
Our clearest finding is that most answers by the language models do not provide any source for their information. Only 11 out of 100 ChatGPT answers and 19 of 100 YouChat answers were scored more than the lowest value for sourcing. Neither system provides a source for Maria Callas' biographical information or concrete examples of physically realized quantum computers.
When sources are provided, they are often vague, irrelevant, or misleading. The neuroscientist remarked on the first problem, writing, "the references are vague; it can cite the names of scientific journals and books but not specific articles or book chapters". When the models provided a source, we found that it was often an only tangentially relevant Wikipedia article (Figure 2 provides an anomalous example). These sources are sometimes loosely related by keywords, but still irrelevant, such as a reference to Wikipedia's article on post-traumatic epilepsy for a question about using imaging to diagnose traumatic injuries. In a question on quantum information, a relevant Stephen Hawking paper was recommended, but an unrelated link was provided.
Perhaps the most serious concern is where an authoritative source is invoked, but inaccurately.
When asked "What should a radiologist recommend to a patient after the radiologist incidentally detects a thyroid nodule on a chest CT scan done for another reason?" ChatGPT claims, "The American Thyroid Association recommends that patients with a thyroid nodule larger than 1 cm or with suspicious features on imaging should undergo a fine-needle aspiration (FNA) biopsy." But, "neither the ACR not ATA recommend that patients with a thyroid nodule larger than 1 cm should categorically undergo fine-needle aspiration"! This echoes previous findings in the domain of medicine, where work evaluating previous generations of voice as-
![4_image_1.png](4_image_1.png)
![4_image_2.png](4_image_2.png)
sistants has shown that they provided inaccurate medical answers that could have proven fatal (Bickmore et al., 2018). Our neuroscientist asked a niche question where YouChat identified a specific journal article, but it appears to be made up (neither we, nor she, were able to find it) although she did judge the answer as completely accurate.
## 4.4 Mixed Recommendations For Use
Only 3 of the 10 experts would recommend using ChatGPT, and 0 of the 10 would recommend YouChat, in a professional setting (rating of 4 or higher, where 5 is "full confidence"). However, the majority would endorse both systems for general-purpose questions about their domain (70% rating of 4 or higher)—more than would endorse Wikipedia for the same (60% rating of 4 or higher). The family physician summarized a common theme, "once again Wikipedia has extensive articles on life expectancy extension but nowhere near as concise as this", and the linguist wrote on YouChat's answer, "this is an excellent concise response, although wiki provides more information (as usual)."
## 5 Discussion
Language models were coherent, but undersourced and not always accurate. They were generally not endorsed for professional use, but were seen as valuable by some experts as a source of knowledge for people outside the domain. Providing sourcing citations will be an important step in building confidence. Even when citations do appear, they are inconsistent enough that important results should be verified.
![4_image_0.png](4_image_0.png)

Our findings are reinforced by the contemporaneous work of Liu et al. (2023), which provides a more systematic audit of four generative search engines (including YouChat, but not ChatGPT) on a diverse series of queries (including common Google searches and questions on Reddit) using 34 prescreened MTurk annotators. They also find that these search engines are "credible without credit"—having high fluency and perceived utility, but insufficient sourcing. They find that about half of the responses are fully supported by citations and that three fourths of the citations given did not actually support the sentence they accompany. One of their main findings is a negative correlation between citation recall/precision and fluency/perceived utility. Sourcing is so absent in our study that we observe no meaningful correlation between it and the other variables; accuracy correlates positively with coherence and conciseness.
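A minimal sketch of how such correlations between rating dimensions can be computed is shown below; the arrays are hypothetical placeholders, not the study's data, and `statistics.correlation` requires Python 3.10 or newer.

```python
# Illustrative only: correlating rating dimensions (e.g., accuracy vs. coherence).
from statistics import correlation  # Pearson correlation, available in Python >= 3.10

# Hypothetical 1-5 Likert ratings for eight answers (placeholders, not real data).
accuracy    = [5, 4, 1, 2, 3, 5, 4, 2]
coherence   = [5, 5, 4, 4, 4, 5, 5, 4]
conciseness = [4, 4, 5, 3, 4, 5, 4, 4]

print(correlation(accuracy, coherence))
print(correlation(accuracy, conciseness))
```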
Further work could investigate if these discrepancies are due to differences in the systems evaluated, the kinds of questions asked, or the judgments of experts vs. annotators. This difference aside, their findings resonate with ours that credibility without credit should make us cautious in looking to language models as a source of expertise.
## Limitations
Our study has three important limitations. First, our study is small in scope. By their nature, experts are difficult to recruit and consequently the domains we can cover are limited. The small sample also suggests that the quantitative measures may not be stable in a larger or more representative sample.
Second, our observation process was somewhat artificial. We generated replies for our experts and did not do any prompt tuning. This reflects the way the expert chose to ask the question, but does not capture the ceiling of performance that would be possible in a conversation. As the Family Medicine expert noted about our question comparing Wikipedia to ChatGPT, "for more detail one could spend more time with Wikipedia and to the organization themselves, but chat provides an immediate general summary and the opportunity to drill down further with ongoing questions and conversation. I have used chat GTP to do medical and biological research In a matter of minutes which would have taken me hours previously". A more extensive study on information-seeking behaviors would be of interest, and Liu et al. (2023) is a useful step in that direction.
Third, the responses across experts are not necessarily comparable. We allowed experts to choose their own questions and provide their own interpretations of the key measures like coherence or conciseness. Comparability of scales across contexts is a long-standing problem in survey research
(King and Wand, 2007) and we highlight some of the concerns around the accuracy question above.
Nevertheless, we felt that asking a set of closed-ended questions would help to provide some aggregate judgment, adding some systematic data to the anecdotes shared in public forums. While we caution about drawing any binding conclusions from our preliminary work, we felt that given the fast-evolving nature of the technology, a quick assessment was merited. Our findings are broadly supported using different questions and methodology in Liu et al. (2023).
One important aspect that is out of scope in our analysis is differential accuracy by question asker.
Latanya Sweeney's classic study of racial discrimination in online ads (Sweeney, 2013) points to the possibility that how a question is asked or where it is asked *from* could result in inaccurate or harmful answers for marginalized communities (see also Noble, 2018; Benjamin, 2019). We have also focused exclusively on English language questions and answers, but differences in easily-available training data across languages can produce substantial differences in the information offered. For example, Yang and Roberts (2021) shows that embeddings trained on Baidu Baike—an online Chinese encyclopedia—encode substantially different associations with sensitive historical events and people than Chinese Language Wikipedia (which is regularly blocked in China). There is much more to understand about the degree to which large language models can mimic expertise.
## Ethics Statement
Work was approved by Princeton University's IRB
under Proposal 15346. No deception was used in the experiment and we screened language model responses for any sensitive content before passing them to the experts (although we did not encounter any). Participants were not compensated for participation and gave consent to be identified. All appropriate IRB protocols in providing instructions and gathering consent were followed.
## Acknowledgements
Research reported in this publication was supported by The Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health under Award Number P2CHD047879. Additionally, this material is based upon work supported by the National Science Foundation under Grant \# 2127309 to the Computing Research Association for the CIFellows 2021 Project. This work would not have been possible without all the experts generously donating their busy time. We are extremely grateful to them.
## References
Emily M Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*,
pages 610–623.
Ruha Benjamin. 2019. *Race after technology: Abolitionist tools for the new jim code*. Polity, Cambridge.
Timothy W Bickmore, Ha Trinh, Stefan Olafsson, Teresa K O'Leary, Reza Asadi, Nathaniel M Rickles, and Ricardo Cruz. 2018. Patient and consumer safety risks when using conversational assistants for medical information: an observational study of siri, alexa, and google assistant. *Journal of medical Internet* research, 20(9):e11510.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
B Barla Cambazoglu, Valeriia Baranova, Falk Scholer, Mark Sanderson, Leila Tavakoli, and Bruce Croft.
2021. Quantifying human-perceived answer utility in non-factoid question answering. In *Proceedings* of the 2021 Conference on Human Information Interaction and Retrieval, pages 75–84.
Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, and Yejin Choi. 2022. Is GPT-3 text indistinguishable from human text? scarecrow: A
framework for scrutinizing machine text. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 7250–7274, Dublin, Ireland. Association for Computational Linguistics.
Harry G Frankfurt. 2005. *On bullshit*. Princeton University Press.
Nico Grant and Cate Metz. 2022. A new chat bot is a 'code red' for google's search business. The New York Times.
Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2023. Gpt-4 passes the bar exam. *Available at SSRN 4389233*.
Gary King and Jonathan Wand. 2007. Comparing incomparable survey responses: Evaluating and selecting anchoring vignettes. *Political Analysis*, 15(1):46–
66.
Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel DiazCandido, James Maningo, et al. 2023. Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLoS digital health, 2(2):e0000198.
Rensis Likert. 1932. A technique for the measurement of attitudes. *Archives of psychology*, 22(140):5–55.
Nelson F Liu, Tianyi Zhang, and Percy Liang. 2023.
Evaluating verifiability in generative search engines.
arXiv preprint arXiv:2304.09848.
Andrew Loten. 2022. Chatty ai and protein-predicting algorithm defined the year in emerging tech. The Wall Street Journal.
Stephen Marche. 2022. The college essay is dead. The Atlantic.
Donald Metzler, Yi Tay, Dara Bahri, and Marc Najork.
2021. Rethinking search: making domain experts out of dilettantes. In *ACM SIGIR Forum*, volume 55, pages 1–27. ACM New York, NY, USA.
Arvind Narayanan and Sayash Kapoor. 2022. Chatgpt is a bullshit generator. but it can still be amazingly useful. *AI Snake Oil*.
Safiya Umoja Noble. 2018. Algorithms of oppression.
In *Algorithms of Oppression*. New York University Press.
Alexis Palmer and Arthur Spirling. 2023. Large language models can argue in convincing and novel ways about politics: Evidence from experiments and human judgement. Working paper.
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics.
Chirag Shah and Emily M Bender. 2022. Situating search. In *ACM SIGIR Conference on Human Information Interaction and Retrieval*, pages 221–232.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, and et. al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
arXiv preprint arXiv:2206.04615.
Chris Stokel-Walker. 2023. Chatgpt listed as author on research papers: many scientists disapprove. *Nature*,
613(7945):620–621.
Latanya Sweeney. 2013. Discrimination in online ad delivery. *Communications of the ACM*, 56(5):44–54.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *7th International Conference on Learning Representations,*
ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Eddie Yang and Margaret E Roberts. 2021. Censorship of online encyclopedias: Implications for nlp models. In *Proceedings of the 2021 ACM Conference on* Fairness, Accountability, and Transparency, pages 537–548.
## A Appendix

## A.1 Study Participants
The experts who wrote and evaluated these questions are enumerated in Table 2.
## A.2 Reproducibility
We provided three reproductions of the first questions for evaluation. An example is provided in Table 3.
## A.3 Data Examples
We provide example ChatGPT and YouChat answers for the opera and quantum information examples in Table 4.
## A.4 Data Collection
These questions were sourced through the form shown in Figure 3 and evaluated through the form in Figure 4.
## A.5 Additional Data
We release our data publicly at https://denis.ai/bot_thoughts/.
| Speciality | Name |
|--------------------|--------------------------|
| Family Medicine | Dr. D. Andrew MacFarlan |
| Radiology | Dr. Dionysios Koroulakis |
| Toxicology | Dr. David Schaffer |
| Geoarchaeology | Dr. Timothy Beach |
| Neuroscience | Elizabeth Tawa |
| Quantum Info | Dr. Andrew Guo |
| Linguistics | Dr. Adele Goldberg |
| Digital Humanities | Dr. Ryan Heuser |
| Ballroom | Dan Calloway |
| Opera | Fred Plotkin |
Table 2: Participants and their specialties.
## Regeneration Of Question
Human: "Can imaging studies help diagnose traumatic injuries?"
ChatGPT: *"Yes, imaging studies such as Xrays, CT scans, and MRI can be used to help* diagnose traumatic injuries. They can be used to identify broken bones, internal bleeding, and other injuries that may not be visible on the surface of the body. Imaging studies can also be used to monitor the healing process and determine the severity of an injury."
ChatGPT: "Imaging studies such as X-rays, CT scans, and MRIs can help diagnose traumatic injuries by providing detailed images of the affected area. These images can reveal bone fractures, dislocations, and injuries to internal organs and soft tissue. They can also be used to monitor the healing process and detect complications. In some cases, a combination of imaging studies may be needed to fully evaluate the extent of an injury."
ChatGPT: "Yes, imaging studies such as Xrays, CT scans, and MRI scans can help diagnose traumatic injuries. These imaging techniques can provide detailed pictures of the bones, soft tissue, and internal organs, which can help physicians identify and diagnose injuries such as broken bones, head injuries, internal bleeding, and other conditions. However, it's important to note that imaging studies are not always necessary for the diagnosis of traumatic injuries and the decision to use imaging should be based on the individual patient's history and clinical presentation."
Table 3: Regeneration of a question by ChatGPT.
Figure 3: The form used to collect questions from experts. (a) Prompts for common questions. (b) Prompts for niche questions.

Figure 4: The evaluation form. (a) Feedback is provided along a Likert scale for each question. (b) A summary judgement is made after all questions.
Human: "Was Maria Callas American?"
ChatGPT: "No, Maria Callas was not American. Maria Callas, whose full name was Maria Anna Cecilia Sofia Kalogeropoulos, was born on December 2, 1923, in New York City, United States. However, she was of Greek descent. Her parents were Greek immigrants, and she later acquired Greek citizenship. Maria Callas is renowned as one of the greatest opera singers of the 20th century and was particularly known for her performances in the bel canto repertoire." YouChat: "Yes, Maria Callas was American-born. She was born in Manhattan, New York City, to Greek immigrant parents. However, she later established her career in Italy and became known as a famous Greek soprano."
Human: "Can quantum computers be realized in the physical world?"
ChatGPT: "Quantum computers are a theoretical concept that has not yet been fully realized in the physical world. However, significant progress has been made in recent years in the development of quantum computing hardware, and several companies and research groups have built prototype quantum computers that are capable of running small-scale quantum algorithms. These prototypes are often referred to as "Noisy Intermediate-Scale Quantum"
(NISQ) computers, as they are not yet capable of achieving the full potential of quantum computing, but they are able to perform some quantum computations that are beyond the capabilities of classical computers.
There are several different physical implementations of quantum computing being researched, including superconducting qubits, trapped ions, topological qubits and so on. Each of these approaches has its own set of challenges and limitations, and it is not yet clear which approach will ultimately be the most successful in building large-scale, practical quantum computers.
It's worth to note that, currently, the most advanced quantum computers are still far from being able to perform any practical computations, and it's still uncertain when, or even if, fully-functional, large-scale, fault-tolerant quantum computers will be built."
YouChat: *"Yes, quantum computers can be realized in the physical world. Researchers have* been able to build prototype systems that demonstrate the principles of quantum computing, and there are a number of research groups and companies working on building more advanced versions. Additionally, several companies have announced plans to build full-scale quantum computers in the next few years."
Table 4: The full ChatGPT and YouChat responses to the two in-paper examples.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
LIMITATIONS
✓ A2. Did you discuss any potential risks of your work?
ETHICS
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Ethics
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix, Section 2
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Ethics / IRB
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Ethics/ IRB
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Ethics

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Participants are identified directly. Appendix A
murty-etal-2023-grokking | Grokking of Hierarchical Structure in Vanilla Transformers | https://aclanthology.org/2023.acl-short.38 | For humans, language production and comprehension is sensitive to the hierarchical structure of sentences. In natural language processing, past work has questioned how effectively neural sequence models like transformers capture this hierarchical structure when generalizing to structurally novel inputs. We show that transformer language models can learn to generalize hierarchically after training for extremely long periods{---}far beyond the point when in-domain accuracy has saturated. We call this phenomenon structural grokking. On multiple datasets, structural grokking exhibits inverted U-shaped scaling in model depth: intermediate-depth models generalize better than both very deep and very shallow transformers. When analyzing the relationship between model-internal properties and grokking, we find that optimal depth for grokking can be identified using the tree-structuredness metric of CITATION. Overall, our work provides strong evidence that, with extended training, vanilla transformers discover and use hierarchical structure. | # Grokking Of Hierarchical Structure In Vanilla Transformers
Shikhar Murty† Pratyusha Sharma‡ Jacob Andreas‡ **Christopher D. Manning**†
†Computer Science Department, Stanford University ‡MIT CSAIL
{smurty, manning}@cs.stanford.edu, {pratyusha, jda}@mit.edu
## Abstract
For humans, language production and comprehension are sensitive to the hierarchical structure of sentences. In natural language processing, past work has questioned how effectively neural sequence models like transformers capture this hierarchical structure when generalizing to structurally novel inputs. We show that transformer language models can learn to generalize hierarchically after training for extremely long periods—far beyond the point when indomain accuracy has saturated. We call this phenomenon *structural grokking*. On multiple datasets, structural grokking exhibits inverted U-shaped scaling in model depth: intermediatedepth models generalize better than both very deep and very shallow transformers. When analyzing the relationship between model-internal properties and grokking, we find that optimal depth for grokking can be identified using the tree-structuredness metric of Murty et al.
(2023). Overall, our work provides strong evidence that, with extended training, vanilla transformers discover and use hierarchical structure.
## 1 Introduction
Although human language is produced as a linear sequence, it is hierarchically organized. Smaller units compose to form larger constituents. The ability to infer this hierarchical structure underlies our ability to produce and understand new sentences
(Chomsky, 1965; Crain and Nakayama, 1987). In this paper, we investigate whether standard neural transformer models (Vaswani et al., 2017) can also generalize hierarchically when trained on language processing tasks (Fig 1). Our main finding is that hierarchical generalization in transformers does occur, but very slowly: performance on structurally novel sentences increases gradually, long after performance on sentences from the training distribution has plateaued. We term this phenomenon *structural grokking*, by analogy to existing findings on simple classification tasks (Power et al., 2022).
![0_image_0.png](0_image_0.png)
Figure 1: Examples from language modeling datasets we use to assess hierarchical generalization in vanilla transformers. These datasets are constructed so that both a non-hierarchical as well as a hierarchical rule can perfectly fit the training set, but only the hierarchical rule generalizes to structurally novel inputs.
On two datasets, we show that structural grokking exhibits inverted U-shaped scaling behavior as a function of model depth: hierarchical generalization improves, then declines, as we train deeper models. Prior work suggests that a number of model-internal properties might track the emergence of hierarchical structure in transformers, including weight norms (Merrill et al.,
2021; Liu et al., 2022; Power et al., 2022), attention sparsity (Merrill et al., 2021), and functional treestructuredness (Murty et al., 2023). We find that functional tree-structuredness is uniquely able to predict structural grokking—while weight norms and attention sparsity increase monotonically in model depth, tree-structuredness is highest for models of the optimal depth for structural grokking.
Our results challenge findings from prior work
(Mueller et al., 2022; Petty and Frank, 2021) claiming that ordinary transformers completely fail on the tests of hierarchical generalization that we study.
We attribute these failures to early stopping based on in-domain validation performance, which signif439 icantly underestimates hierarchical generalization due to structural grokking. On the datasets where this prior work reports generalization accuracies below 20%, *simply by training for longer*, mean accuracy across random seeds reaches 80%, and several seeds achieve near-perfect generalization performance. Past findings are also partially explained by U-shaped scaling: this work uses models that are too shallow (Mueller et al., 2022; Petty and Frank, 2021) or too deep (Mueller et al., 2022).
Our results align with past findings on the role of extended training in other language processing problems (Csordás et al., 2021; Hoffmann et al.,
2022).
## 2 Background
Transformers Given a sequence of tokens $w_{\leq i} = w_1, w_2, \ldots, w_i$, where each token is drawn from a fixed vocabulary $V$, an $L$-layer transformer language model (LM) $f^L_\theta$ outputs a distribution over the next token $w_{i+1} \in V$, $f^L_\theta(w_{\leq i}) \in \mathbb{R}^{|V|}$. A key part of the architecture is a sequence of $L$ self-attention layers, where layer $p$ computes contextual vectors of token $k$ as a non-linear parametric function of a convex combination of contextual vectors of tokens $w_{\leq k}$ from the previous layer, where the coefficients $a^p_k \in \mathbb{R}^{k}$ are known as the *attention distribution*. The LM weights are learned by maximizing the log probability of the correct continuation $w_{k+1}$, given prefix $w_{\leq k}$.
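To make the notation concrete, the following is a schematic PyTorch sketch of an $L$-layer transformer LM of this kind; it is not the authors' implementation (which, per Appendix B, additionally ties the input and output embeddings), and all sizes are illustrative.

```python
# Minimal sketch (not the paper's code) of an L-layer causal transformer LM f_theta^L
# that maps a prefix w_{<=i} to next-token logits over the vocabulary V.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size, num_layers, d_model=512, nhead=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, seq)
        seq_len = tokens.size(1)
        # causal mask so position k only attends to tokens w_{<=k}
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=causal)
        return self.out(h)                          # (batch, seq, |V|) next-token logits

lm = TinyTransformerLM(vocab_size=100, num_layers=4)
tokens = torch.randint(0, 100, (2, 16))
logits = lm(tokens)
# training objective: log-probability of each continuation w_{k+1} given the prefix w_{<=k}
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 100), tokens[:, 1:].reshape(-1))
```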
Hierarchical structure in transformers While unsupervised pre-training of transformers has led to state-of-the-art transfer learning results across NLP, the architecture itself has been claimed to lack human-like inductive biases toward hierarchical structure (Tran et al., 2018; Hahn, 2020; Petty and Frank, 2021; Mueller et al., 2022). We revisit these claims in this work.
To understand whether a given model has a bias for acquiring hierarchical structure, we follow McCoy et al. (2020) and evaluate generalization in models trained on ambiguous tasks in which training data is consistent with both a "hierarchical rule" as well as a "non-hierarchical rule" (Fig 1). To test if the hierarchical rule has been acquired, we test generalization on a separate out-of-distribution test set, constructed such that only learners that have acquired the hierarchical rule are successful.
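A toy sketch of why such training data is ambiguous is shown below for question formation; the sentences and the two rule implementations are illustrative simplifications written for this sketch, not the actual grammar or data of McCoy et al. (2020).

```python
# Illustrative only: a "linear" and a "hierarchical" rule that agree on simple
# training sentences but diverge on structurally novel generalization inputs.

def linear_rule(tokens):
    """Non-hierarchical rule: front the FIRST auxiliary in the string."""
    i = next(k for k, t in enumerate(tokens) if t in {"does", "doesn't"})
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def hierarchical_rule(tokens, main_aux_index):
    """Hierarchical rule: front the MAIN-CLAUSE auxiliary (index supplied by a parse)."""
    return [tokens[main_aux_index]] + tokens[:main_aux_index] + tokens[main_aux_index + 1:]

train = "my unicorn does sing".split()                   # both rules produce the same question
assert linear_rule(train) == hierarchical_rule(train, 2)

test = "my unicorn that does sing doesn't swim".split()  # structurally novel input
print(linear_rule(test))           # fronts the embedded "does" (wrong)
print(hierarchical_rule(test, 5))  # fronts the main-clause "doesn't" (correct)
```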
Grokking Power et al. (2022) identify the phenomenon of *grokking* on small algorithmic datasets where they find that test performance improves long after training performance has saturated. We hypothesize a similar *structural grokking*, where the model groks hierarchical structure long after in-domain validation performance has saturated, and consequently, hierarchical generalization can continue to improve with extended training.
## 3 Experiments
Datasets Since our goal is to understand hierarchical generalization in transformers, we use two datasets from McCoy et al. (2020) and additionally evaluate on a simple bracket-tracking task. For *Dyck*, models are trained to predict next tokens in strings drawn from Dyck$_{20,10}$, the language of well-nested brackets with 20 types and a max nesting depth of 10. We evaluate generalization to structurally unobserved strings in Dyck$_{20,10}$ (see Fig 1 for examples and Appendix A for details). For the McCoy et al. (2020) datasets, in *Question-Formation*, models must convert English sentences into questions and, in *Tense-Inflection*, models must map from sentences and tense markers to appropriately re-inflected sentences. We evaluate generalization on the out-of-distribution test set from McCoy et al. (2020).
Model We train transformer LMs with {2, 4, 6, 8, 10} layers (see Appendix B for more details). For each depth, we train models with 10 random seeds for 300k (400k for Dyck) steps. Given the input sentence (or prefix in the case of Dyck), we decode greedily from the model at test time. For Dyck, we report the accuracy of generating the correct closing bracket type by ranking among closing brackets, given an input prefix from the language. As done in prior work (McCoy et al., 2020; Petty and Frank, 2021; Mueller et al., 2022), for Question-Formation, we report first-word accuracy of the decoded question, and for Tense-Inflection, we report the fraction of test inputs for which the target verb is correctly inflected.
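A sketch of the two evaluation metrics just described is given below; `rank_fn` is a hypothetical stand-in for a function returning the model's score (e.g., log-probability) of a closing bracket given a prefix, and this is not the released evaluation code.

```python
# Sketch of the evaluation metrics described above (illustrative, not the released code).

def first_word_accuracy(predicted_questions, gold_questions):
    """Question-Formation: fraction of decoded questions whose first word is correct."""
    hits = sum(p.split()[0] == g.split()[0]
               for p, g in zip(predicted_questions, gold_questions))
    return hits / len(gold_questions)

def closing_bracket_accuracy(rank_fn, prefixes, gold_closers, closing_vocab):
    """Dyck: rank model probability among closing brackets only and check the argmax."""
    correct = 0
    for prefix, gold in zip(prefixes, gold_closers):
        scores = {b: rank_fn(prefix, b) for b in closing_vocab}  # e.g., log p(b | prefix)
        correct += max(scores, key=scores.get) == gold
    return correct / len(prefixes)
```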
## 3.1 Main Results
Transformers exhibit structural grokking We first present results obtained with the best model depth on all datasets in Fig 2. We find clear evidence of structural grokking: across datasets, generalization improves many training steps after in-distribution accuracy has saturated, sometimes approaching perfect accuracy.
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
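As an illustration of what the curves in Fig 2 show, one could quantify the gap between in-domain saturation and out-of-domain improvement roughly as follows; this heuristic is ours, for exposition only, and is not a measure defined in the paper.

```python
# Illustrative heuristic: detect a structural-grokking-style gap from logged accuracies.
# `log` maps training step -> (in-domain accuracy, out-of-domain accuracy).

def grokking_gap(log, saturation_level=0.99):
    steps = sorted(log)
    # first step at which in-domain accuracy saturates
    sat_step = next(s for s in steps if log[s][0] >= saturation_level)
    # step at which out-of-domain accuracy is best
    best_ood_step = max(steps, key=lambda s: log[s][1])
    return best_ood_step - sat_step  # a large positive gap = generalization long after saturation

log = {1000: (0.70, 0.10), 5000: (0.99, 0.15), 50000: (1.00, 0.40), 200000: (1.00, 0.95)}
print(grokking_gap(log))  # 195000 steps between in-domain saturation and the best OOD accuracy
```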
Early stopping considered harmful Next, we compare generalization accuracy obtained by early stopping on in-domain validation accuracy (as done in Petty and Frank (2021); Mueller et al. (2022)) to longer training runs (Fig 2). Early stopping leads to vastly underestimating generalization: for instance, average generalization rises from below 40% and 50% to roughly 90% and 80% on Question-Formation and Tense-Inflection, respectively.
Inverted U-shaped scaling On Question-Formation and Tense-Inflection, we train models of increasing depths from 2 to 10 layers. For each depth, we report the fraction of seeds (out of 10) where generalization accuracy eventually crosses 80%, in Fig 3a. We find an inverted U-shaped scaling behavior—very shallow and very deep models are unsuccessful, while most seeds generalize in models of intermediate depth. This may also explain why prior work that either used very shallow models (1–3-layer transformers in Petty and Frank (2021); Mueller et al. (2022))
or very deep models (12-layer transformers in Mueller et al. (2022)) failed to generalize well.
## 4 Analysis
Given that structural grokking occurs only in a subset of model architectures, can we identify when it has happened (or predict when it will occur)? Several model-internal properties have been claimed to relate to either grokking or emergent hierarchical structure.

Weight Norms Recent work (Power et al., 2022; Liu et al., 2022) identifies the L2 norm of parameter weights as an important quantity for grokking.
For instance, Power et al. (2022) find weight decay to improve grokking speed and Liu et al. (2022)
identify a "goldilocks zone" in weight norm space where grokking occurs. More generally, norm growth over the course of training has been studied as a key factor in neural network generalization
(Soudry et al., 2018).
Attention Sparsity Merrill et al. (2021) prove that norm growth in transformers leads to attention saturation, an important property for emergent linguistic structure (Merrill et al., 2022). As a proxy for the attention sparsity of $f^L_\theta$, we compute the negative mean entropy of all distributions $\{a^p_k\}$.
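A sketch of how these two diagnostics can be computed for a PyTorch model is shown below; the exact implementation used in the paper may differ, e.g., in how attention maps are extracted from the model.

```python
# Illustrative sketch of the two model-internal diagnostics discussed above.
import torch

def layer_normalized_weight_norm(model, num_layers):
    """L2 norm of all parameters, divided by depth to compare different model sizes."""
    sq = sum(p.detach().float().pow(2).sum() for p in model.parameters())
    return (sq.sqrt() / num_layers).item()

def negative_mean_attention_entropy(attn_maps):
    """attn_maps: list of tensors of attention distributions (probabilities on the last dim)."""
    ents = []
    for a in attn_maps:
        probs = a.clamp_min(1e-12)
        ents.append(-(probs * probs.log()).sum(-1).mean())  # mean entropy per attention map
    # higher value = lower entropy = sparser (more saturated) attention
    return -torch.stack(ents).mean().item()
```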
Tree-structuredness McCoy et al. (2020) show that tree-structured encoders such as that of Tai et al. (2015) achieve near-perfect hierarchical generalization. While transformers are relatively unconstrained, recent evidence suggests that, when trained on language data, they implicitly implement
(approximately) tree-structured computations. In particular, the *tree projection* method of Murty et al. (2023) precisely characterizes the extent to which a transformer's internal computation on an input can be approximated with a tree-structured neural encoding, providing a tree-structuredness score (tscore) for any transformer, and a binary tree that best approximates its computation on an input string (see Appendix C for details). To evaluate whether these trees correspond to human notions of syntax, we additionally compare recovered trees to gold-standard ones (tparseval, Black et al., 1991).
## 4.1 Results
We characterize the *dynamics* of weight norms
(normalized by number of layers to compare different model depths), attention sparsity, and treestructuredness, by computing these quantities every 3k gradient updates for Question-Formation and Tense-Inflection. For data-dependent properties such as attention sparsity and tree-structuredness, we sample 10k examples from the training data.
We plot these quantities for the smallest model, the largest model for which at least one run shows successful grokking, and for the optimal model depth, in Fig 3b.
![3_image_0.png](3_image_0.png)
Optimal models are most tree-structured Weight norms and attention sparsity grow for all model settings in both datasets. However, these properties by themselves are unable to predict that both shallow and deep models fail—shallow models learn the sparsest solutions as well as solutions with largest weight norms, but never generalize hierarchically. As noted by Murty et al. (2023),
tscore improves over time for all models, indicating increased tree-structuredness over time. For both datasets, the "optimal" model learns the most tree-structured solution compared to both deep and shallow models. Liu et al. (2022) note that, on algorithmic tasks, grokking "coincides with the emergence of structure in embeddings". Similarly, for language tasks, we find that structural grokking coincides with the emergence of tree-structured internal computations.
Transformers are surprisingly effective at structure induction From the dynamics of tparseval in Fig 4, we note that all models, regardless of whether they generalize or not, learn structures that are close to ground truth syntax, sometimes outperforming a right-branching baseline. McCoy et al.
(2020) note that tree-structured encoders only generalize when structured according to correct parse trees. Here, we find that all transformers learn correct tree structures, but only the ones that are the most tree-structured generalize best.
## 5 Conclusion
This work shows that transformers are capable of exhibiting *structure-sensitive* "hierarchical generalization" via a grokking mechanism. Their overall learning behavior gradually shifts from memorization (high in-domain accuracy, poor out-of-domain accuracy) to generalization (high in-domain and out-of-domain accuracy). While we show such behavior on relatively small datasets with small models, we believe these results may have broader implications, as training for longer has been shown to help even for web-scale language modeling (Hoffmann et al., 2022) and on compositional generalization tasks (Csordás et al., 2021). Structural grokking happens most often at "medium-sized" model depths, and both very shallow and very deep models fail to exhibit it. While properties previously connected with linguistic generalization in transformers such as weight norms and attention sparsity do not differentiate good architectures from bad ones, functional tree-structuredness of the transformer can well predict the optimal model depth. While there are clear limitations to the transformer architecture (such as the inability to implement unbounded recursion), our results show that it may have stronger inductive biases than previously believed: With sufficient training, transformers can represent hierarchical sentence structure and use this structure to generalize correctly.
## 6 Reproducibility
All code and data for these experiments are available at https://github.com/MurtyShikhar/structural-grokking.git.
## 7 Acknowledgements
SM was funded by a gift from Apple Inc. CM is a fellow in the CIFAR Learning in Machines and Brains program. We thank John Hewitt, Belinda Li, Rishi Bommasani and members of the Stanford NLP group for feedback on the paper.
## Limitations
Our work has the following limitations. First, we only evaluate generalization on datasets based on English language. Second, we show structural grokking on three datasets, and while we believe this to be a general phenomenon, we leave investigating similar behavior on other datasets for future work. Next, we also do not study the effect of training data size on structural grokking, and do not investigate whether transformers learn to grok hierarchical structure in low data regimes. Finally, all datasets here are based on context-free grammars, either similar to or taken directly from prior work, and we believe constructing similar generalization benchmarks on real language data is a good avenue for future work.
## References
E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In *Speech and Natural Language: Proceedings of a Workshop Held at Pacific* Grove, California, February 19-22, 1991.
Noam Chomsky. 1965. *Aspects of the Theory of Syntax*.
The MIT Press, Cambridge.
Stephen Crain and Mineharu Nakayama. 1987. Structure dependence in grammar formation. *Language*,
63(3):522–543.
Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber.
2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 619–
634, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Michael Hahn. 2020. Theoretical limitations of selfattention in neural sequence models. *Transactions of* the Association for Computational Linguistics, 8:156–
171.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training computeoptimal large language models. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Ziming Liu, Eric J Michaud, and Max Tegmark. 2022.
Omnigrok: Grokking beyond algorithmic data. In The Eleventh International Conference on Learning Representations.
R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020.
Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. *Transactions of the Association for Computational Linguistics*.
William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. 2021. Effects of parameter norm growth during transformer training:
Inductive bias from gradient descent. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1766–1781, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
William Merrill, Ashish Sabharwal, and Noah A. Smith.
2022. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843–856.
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank
slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In Findings of the Association for Computational Linguistics: ACL
2022, pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics.
Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher D Manning. 2023. Characterizing intrinsic compositionality in transformers with tree projections. In *The Eleventh International Conference on* Learning Representations.
Jackson Petty and Robert Frank. 2021. Transformers generalize linearly. *arXiv preprint* arXiv:2109.12036.
Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. 2022. Grokking:
Generalization beyond overfitting on small algorithmic datasets. *arXiv preprint arXiv:2201.02177*.
Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain.
Association for Computational Linguistics.
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. 2018. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–
2878.
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–
1566, Beijing, China. Association for Computational Linguistics.
Ke M Tran, Arianna Bisazza, and Christof Monz. 2018.
The importance of being recurrent for modeling hierarchical structure. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 4731–4736.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30.
![6_image_0.png](6_image_0.png)
Table 1: Statistics for all datasets used in this work.
## A Dataset Details
All statistics are in Table 1. For QuestionFormation and Tense-Inflection, we use splits as given in McCoy et al. (2020) with no additional preprocessing. We give details of Dyck below.
Dyck Details We construct our Dyck dataset by sampling 200k strings from Dyck$_{20,10}$, the language of well-nested brackets with 20 different bracket types and nesting depth at most 10. For each string, we define its structure as a binary vector of 0s and 1s; for instance, the structure of "(([]))" is "111000". To construct a generalization set, we sample strings with unobserved structures, i.e., strings whose 0–1 structure does not match the structure of any of the training strings. Since the objective at test time is to measure closing-bracket accuracy, we only rank model probability among all closing brackets, and we only evaluate on prefixes where the opening bracket is at least 10 tokens away from its corresponding closing bracket.
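A sketch of one way to sample such strings and compute their 0/1 structure is given below; the sampling probabilities and token naming are assumptions for illustration, not the paper's exact generation procedure.

```python
# Illustrative sketch: sampling depth-bounded Dyck strings with 20 bracket types and
# computing the binary "structure" (1 = opening bracket, 0 = closing bracket).
import random

OPEN = [f"({i}" for i in range(20)]            # placeholder token names for the 20 types
CLOSE = {f"({i}": f"){i}" for i in range(20)}  # open token -> matching close token

def sample_dyck(max_depth=10, p_open=0.5, max_len=60):
    seq, stack = [], []
    while len(seq) < max_len:
        if stack and (len(stack) >= max_depth or random.random() > p_open):
            seq.append(CLOSE[stack.pop()])     # close the most recent open bracket
        else:
            b = random.choice(OPEN)
            stack.append(b)
            seq.append(b)
        if not stack:                          # string is complete and well-nested
            break
    while stack:                               # close anything still open
        seq.append(CLOSE[stack.pop()])
    return seq

def structure(seq):
    return "".join("1" if tok in CLOSE else "0" for tok in seq)

print(structure(sample_dyck()))
```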
## B Model Details
We use a transformer language model with the following hyperparameters:
- Number of attention heads = 4
- Hidden dimensionality = 512
- Tied input and output matrices as done in Press and Wolf (2017)
Next, we use the following hyperpameters for optimization:
- AdamW (β1: 0.9, β2: 0.999, ϵ: 1e-7), with learning rates in {1e-4, 5e-5, 1e-5}, noting that 1e-4 works best for all experiments. We use a linear warmup scheduler warming up from 0 to the final learning rate over 10k gradient steps.
- We clip gradients to have a max L2 norm of 10.
- We use a batch size of 8.
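A minimal PyTorch sketch of the optimization setup listed above (AdamW, linear warmup, gradient clipping, and the stated batch size) is shown below; it is illustrative rather than the released training code, and the model is a stand-in.

```python
# Sketch of the optimization setup above (illustrative; not the released code).
import torch

model = torch.nn.Linear(512, 512)  # stand-in for the transformer LM
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-7)
# linear warmup from 0 to the final learning rate over 10k gradient steps
warmup = torch.optim.lr_scheduler.LambdaLR(opt, lambda step: min(1.0, (step + 1) / 10_000))

for step in range(3):                                   # training-loop skeleton
    loss = model(torch.randn(8, 512)).pow(2).mean()     # batch size 8, dummy loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    opt.step()
    warmup.step()
    opt.zero_grad()
```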
## C Functional Tree-Structuredness
Tree Projections (TP; Murty et al. (2023)) measure how well computations performed by a given transformer f can be approximated with tree-structured encoders. To do this, TP solves the following optimization problem:
$$\phi_{\mathrm{proj}},T_{\mathrm{proj}}\triangleq\operatorname*{arg\,min}_{\phi,T}{\mathcal{L}}(f,g_{\phi},T),\qquad(1)$$
where $g_\phi$ is the class of tree-structured encoders that process sentence $S$ according to a bottom-up tree $T(S)$, and $\mathcal{L}$ is a distance function between vector outputs of $f$ and $g_\phi$ on spans from the binary tree $T$. TP minimizes Equation 1 approximately, and recovers an approximate $\widehat{T}_{\mathrm{proj}}$. The tree score over a dataset $\mathcal{D}$ is defined as
$$\Phi_{\mathrm{score}}\triangleq{\frac{\sum_{S\in{\mathcal{D}}}\mathbb{E}_{T}\mathrm{SCI}(S,T)-\mathrm{SCI}(S,{\widehat{T}}_{\mathrm{proj}}(S))}{|{\mathcal{D}}|}},$$
where SCI (span contextual invariance) is the distance between contextual and context-free vector representations of all spans $p$ in $T$ (for more details, see Murty et al. (2023)). In particular, the SCI score for a sentence $S$ structured according to $T(S)$ is

$$\operatorname{SCI}(S,T)\triangleq\sum_{p\in T}d(\mathbf{v}_{p}^{S},{\tilde{\mathbf{v}}}_{p})\qquad(3)$$
for some suitably chosen distance function $d$ (here, cosine similarity). To measure the bracketing F1 score (PARSEVAL; Black et al. (1991)) of the induced tree projection of the transformer $\widehat{T}_{\mathrm{proj}}$ against ground-truth gold syntax trees $T_g$, when available, Murty et al. (2023) define

$$t_{\mathrm{parseval}}\triangleq\operatorname{PARSEVAL}({\widehat{T}}_{\mathrm{proj}},T_{g},{\mathcal{D}}).\qquad(4)$$
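To make these quantities concrete, a rough sketch of computing SCI and the tree score from precomputed span vectors is given below; the data structures are hypothetical, and the actual tree-projection search over candidate trees in Murty et al. (2023) is omitted.

```python
# Sketch of SCI and the tree score given precomputed span vectors (illustrative only;
# the full tree-projection optimization over candidate trees T is not shown here).
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def sci(contextual, context_free, spans):
    """Sum of distances between contextual and context-free span vectors over a tree's spans."""
    return sum(cosine_distance(contextual[p], context_free[p]) for p in spans)

def t_score(sentences, proj_spans, random_spans_samples, contextual, context_free):
    """Average gap between SCI under random trees and SCI under the projected tree."""
    total = 0.0
    for s in sentences:
        expected = np.mean([sci(contextual[s], context_free[s], sp)
                            for sp in random_spans_samples[s]])  # Monte-Carlo estimate of E_T[SCI]
        total += expected - sci(contextual[s], context_free[s], proj_spans[s])
    return total / len(sentences)
```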
## D Training Loss Curves
In Fig 5, we explore the hypothesis that syntactic grokking is simply a result of the training loss continuing to decrease even after in-domain validation performance has saturated. We note that training losses generally saturate before in-domain validation performance saturates (as also noted in Power et al. (2022)). Next, we also find that all models, regardless of whether they grok or not, eventually reach comparable training losses. We conclude that the inverted U-shaped trend is not an artifact of poorly optimized models.
![7_image_0.png](7_image_0.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section-6
✗ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes (in Section-1 and Abstract)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We trained several transformer models, the checkpoints of which will be available upon de-anonymization
✓ B1. Did you cite the creators of artifacts you used?
We cite all prior work whose datasets we use in Sections-2 and 3 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?**
Left blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
deb-etal-2023-zero | Zero-shot Cross-lingual Transfer With Learned Projections Using Unlabeled Target-Language Data | https://aclanthology.org/2023.acl-short.39 | Adapters have emerged as a parameter-efficient Transformer-based framework for cross-lingual transfer by inserting lightweight language-specific modules (language adapters) and task-specific modules (task adapters) within pretrained multilingual models. Zero-shot transfer is enabled by pairing the language adapter in the target language with an appropriate task adapter in a source language. If our target languages are known apriori, we explore how zero-shot transfer can be further improved within the adapter framework by utilizing unlabeled text during task-specific finetuning. We construct language-specific subspaces using standard linear algebra constructs and selectively project source-language representations into the target language subspace during task-specific finetuning using two schemes. Our experiments on three cross-lingual tasks, Named Entity Recognition (NER), Question Answering (QA) and Natural Language Inference (NLI) yield consistent benefits compared to adapter baselines over a wide variety of target languages with up to 11{\%} relative improvement in NER, 2{\%} relative improvement in QA and 5{\%} relative improvement in NLI. | # Zero-Shot Cross-Lingual Transfer With Learned Projections Using Unlabeled Target-Language Data
Ujan Deb∗, IIT Bhilai, [email protected]
Ridayesh Parab∗, IIT Bombay, [email protected]
Preethi Jyothi, IIT Bombay, [email protected]
## Abstract
Adapters have emerged as a parameter-efficient Transformer-based framework for cross-lingual transfer by inserting lightweight languagespecific modules (language adapters) and taskspecific modules (task adapters) within pretrained multilingual models. Zero-shot transfer is enabled by pairing the language adapter in the target language with an appropriate task adapter in a source language. If our target languages are known apriori, we explore how zeroshot transfer can be further improved within the adapter framework by utilizing unlabeled text during task-specific finetuning. We construct language-specific subspaces using standard linear algebra constructs and selectively project source-language representations into the target language subspace during task-specific finetuning using two schemes. Our experiments on three cross-lingual tasks, Named Entity Recognition (NER), Question Answering (QA) and Natural Language Inference (NLI) yield consistent benefits compared to adapter baselines over a wide variety of target languages with up to 11% relative improvement in NER, 2%
relative improvement in QA and 5% relative improvement in NLI.
## 1 Introduction
Zero-shot cross-lingual transfer refers to the transfer of task-specific knowledge from a (highresource) source language to a (zero-resource) target language that has no labeled task-specific data for training. A popular paradigm for cross-lingual transfer learning is to finetune pretrained multilingual models using labeled task-specific data in the source language and directly evaluate these finetuned models on target language test sets. A parameter-efficient alternative to full finetuning for cross-lingual transfer is MAD-X (Pfeiffer et al., 2020b), an adapter-based framework that scaffolds on multilingual pretrained models to combine task-specific and language-specific modules in a plug-and-play manner. Adapters (Houlsby et al., 2019) are feedforward layer blocks inserted within each Transformer layer to selectively learn taskspecific and language-specific capabilities via task adapters and language adapters, respectively. Language adapters are trained using self-supervised objectives like masked language modeling (MLM)
and task adapters are trained using task-specific objectives. To enable task transfer to a target language, the relevant language and task adapters are combined at test-time.
In the zero-shot setting, we assume access to unlabeled text in the target languages. In MAD-X, this text is only used to train target language adapters and is not further used during finetuning. Given knowledge of which languages we want to target, can we make effective use of unlabeled text in the target languages even during task-specific finetuning? This is the main question we tackle in this work.
We propose a general adapter-based technique to inject target language bias into task-specific finetuning. Using the unlabeled text in each target language, we construct an affine subspace from contextualized representations for every Transformer layer in the multilingual model. These subspaces are defined using singular value decomposition
(SVD) and only need to be computed once per target language. During task-specific finetuning using labeled data in the source language, we project the source representations onto the target language subspaces. This projection can be invoked randomly using a projection probability defined as a hyperparameter. Projections can also be triggered depending on whether the current source representations are closer to the mean embedding of the source language subspace compared to the mean embedding of the target language subspace. We investigate both these projection policies and find that they both improve performance across multiple tasks in multiple languages compared to state-ofthe-art adapter baselines. We also release code1to reproduce our experiments.
## 2 Methodology
Adapters and MAD-X. Adapters for language models (Houlsby et al., 2019) are bottleneck feedforward modules, typically inserted in each Transformer layer of a multilingual model before layer normalization. Instead of finetuning the entire model, only adapters are tuned for a specific task. Pfeiffer et al. (2020b) extended adapter-based fine-tuning to support cross-lingual transfer. Their framework, called MAD-X (Multiple Adapters for Cross-lingual transfer), comprises language adapters and task adapters. Language adapters are pretrained using masked language modeling to learn language-specific features. Task adapters are stacked on top of language adapters during downstream task finetuning to learn task-specific information. To achieve zero-shot transfer, the model is trained with a frozen source-language language adapter and a task adapter. During test time, the source-language adapter is replaced with the target-language adapter and evaluated on test instances in the target language.
Overview of our technique. We are interested in the setting where we have apriori knowledge of which languages we want to target at test time. We aim to bias cross-lingual transfer towards known target languages during task-specific finetuning.
We start with MAD-X as our underlying framework and adopt the following 3-step approach:
- We construct layer-specific subspaces for each of the target languages. This is done by computing SVD on contextualized token representations extracted from each layer. See §2.1 for more details.
- During task-specific training, we selectively project output representations from the language adapter of a chosen layer onto the target language subspace. These projections are triggered based on two policies: Random projection (§2.2) and Mean Cosine Distance (§2.3). The projected representations are further passed through the task adapter that is trained using labeled data in the source language.
- Similar to MAD-X, we evaluate on the target language by simply swapping the source language adapter with the target language adapter while keeping the task adapter fixed. No projection is done during inference.

1 https://github.com/csalt-research/adapter-projections
## 2.1 Language Subspaces And Projections
Our objective is to bias the model towards the target language while fine-tuning for a task. For this, we need to extract language-specific information from model representations that jointly exhibit language-specific and language-independent properties. Language-specific subspaces have been typically used to analyze representations in multilingual language models. Choenni and Shutova
(2020) showed that individual representations can be used to predict linguistic typological features after projecting onto language-sensitive subspaces.
Chang et al. (2022) construct language subspaces with SVD using language-specific contextualized token embeddings. They analyze model performance and other properties after computing layerwise projections of representations to various language subspaces.
We construct subspaces for each of the target languages using SVD and contextualized token representations for unlabeled text in the target language.
Consider a pretrained model like XLM-R (Conneau et al., 2020) that takes text sequences from the target language as its input. d-dimensional embeddings from a particular layer for a given language A can be grouped into a matrix MA ∈ Rn×d. The SVD of MA (after subtracting the mean representation for A) can be written as MA = UAΣVA⊤. The right singular matrix VA is considered to be the subspace for language A. These subspaces only need to be computed once for each layer. Next, we look at when projections should be invoked.
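As a concrete sketch of this construction (NumPy; the 90% variance threshold follows the setup described in Section 3, and the variable names are ours):

```python
import numpy as np

def language_subspace(embeddings: np.ndarray, variance: float = 0.90):
    """Build the subspace for one language and one layer.

    embeddings: (n, d) matrix of contextualized token representations M_A.
    Returns the mean representation and V_A (d, k), the top-k right singular
    vectors whose squared singular values cover `variance` of the total.
    """
    mean = embeddings.mean(axis=0)
    # SVD of the mean-centered matrix: M_A = U Σ V^T.
    _, s, vt = np.linalg.svd(embeddings - mean, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cumulative, variance)) + 1
    return mean, vt[:k].T
```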
## 2.2 Random Projection
For a given target language, during finetuning using task-specific data in the source language, we project the source representations onto the target language subspace with a predetermined probability p. This projection is invoked right before passing the representation through the task adapter, having already passed through the language adapter.
To project onto a target subspace, we first shift the target language subspace so that it passes through the source language mean embedding and then take the projection onto the target subspace (Chang et al., 2022). Let S be the source language and Q
be the target language. Let subspaces and means of representations from one of the Transformer layers for the source language be VS and µS
, respectively.
Projection of a representation x on S is given by:
$$\mathrm{Project}_{S}(\mathbf{x})=\mathbf{V}_{S}\mathbf{V}_{S}^{T}(\mathbf{x}-\boldsymbol{\mu}_{S})+\boldsymbol{\mu}_{S}$$
The projection of x onto the target language subspace, that is shifted onto the source subspace, can be computed as:
$$\mathrm{Project}_{Q,\mu_{S}}(\mathbf{x})=\mathbf{V}_{Q}\mathbf{V}_{Q}^{T}(\mathbf{x}-\mu_{S})+\mu_{S}$$
The main intuition here is that by probabilistically projecting source representations onto the target language subspace during task-specific finetuning, the model can encode both source and target language information in its representations. The model cannot solely rely on source-language specific features during task-specific training.
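A sketch of the projection operation and the random projection policy follows (PyTorch tensors; V and the means are assumed to be precomputed per layer as above; whether the coin is flipped per batch or per forward pass is an implementation choice):

```python
import torch

def project_onto(x: torch.Tensor, V: torch.Tensor, mu: torch.Tensor) -> torch.Tensor:
    # V V^T (x - mu) + mu, applied row-wise to a batch of embeddings x: (batch, d).
    return (x - mu) @ V @ V.T + mu

def random_projection(x: torch.Tensor, V_target: torch.Tensor,
                      mu_source: torch.Tensor, p: float) -> torch.Tensor:
    # With probability p, project source representations onto the target-language
    # subspace shifted to pass through the source-language mean.
    if torch.rand(()) < p:
        return project_onto(x, V_target, mu_source)
    return x
```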
## 2.3 Mean Cosine Distance (MCD)
We suggest another projection scheme, Mean Cosine Distance (MCD), that is more informed than randomly projecting source representations based on a projection probability p. Using MCD, we project those embeddings that are deemed as being further away from the target language subspace compared to the source language subspace. This is quantified using a cosine distance between an embedding from a layer and means of source and target language subspaces. If an embedding is closer to the source language mean compared to the target language mean, we project it onto the target language subspace so as to make it more similar to target language embeddings. However, if an embedding is closer to the target language mean, we can possibly omit projection since it already contains information relevant to the target language.
Consider a set of embeddings extracted from one of the Transformer layers. Let the means of all embeddings from this layer and the associated subspace be denoted by µ and V, respectively. µS
and µQ denote the means for the source and target language, respectively. Similarly, VS and VQ refer to the respective subspaces. Let x denote a token embedding from the source language. The MCD
policy can be written as:
$$\mathbf{x}={\begin{cases}\mathrm{Project}_{Q,\boldsymbol{\mu}_{S}}(\mathbf{x})&{{\mathrm{if~c}}(\mathbf{x},\boldsymbol{\mu}_{Q})<\mathrm{c}(\mathbf{x},\boldsymbol{\mu}_{S})}\\ \mathbf{x}&{{\mathrm{otherwise}}}\end{cases}}$$
where ProjectQ,µS(x) is defined in Section 2.2 as the projection of x onto the target subspace VQ, and c(x, y) refers to the cosine similarity between two embeddings x and y.

![2_image_0.png](2_image_0.png)
Figure 1 provides an illustration of our proposed technique within a single Transformer layer that includes language and task adapters (as in the MAD-X framework).
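A corresponding sketch of the MCD policy (self-contained, with the same assumptions as the previous sketch; the batching of the decision per token embedding is our simplification):

```python
import torch
import torch.nn.functional as F

def project_onto(x, V, mu):
    # V V^T (x - mu) + mu, applied row-wise.
    return (x - mu) @ V @ V.T + mu

def mcd_projection(x, V_target, mu_source, mu_target):
    # Project only the embeddings that are closer (by cosine similarity) to the
    # source-language mean than to the target-language mean.
    sim_source = F.cosine_similarity(x, mu_source.unsqueeze(0), dim=-1)
    sim_target = F.cosine_similarity(x, mu_target.unsqueeze(0), dim=-1)
    project_mask = (sim_target < sim_source).unsqueeze(-1)
    return torch.where(project_mask, project_onto(x, V_target, mu_source), x)
```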
## 3 Experimental Setup
Subspace construction. To construct language-specific subspaces, we adopt the settings used by Chang et al. (2022). Text sequences of length 512 are taken from the OSCAR dataset (Ortiz Suárez et al., 2019) and passed through XLM-R (Conneau et al., 2020) to produce layer-wise contextualized embeddings. We pick 262K contextualized representations and subtract the representation mean before computing SVD. For a low-dimensional subspace, we select the greatest k singular values such that their sum of squares is greater than or equal to 90% of the total variance. (Total variance is given by the sum of the squared singular values produced.) Finally, in order to compute the language-specific subspaces, the corresponding right singular vectors are taken as the basis.
| NER               | hi   | vi   | de   | id   | is   | ilo  | sw   | my   | jv   | avg  |
|-------------------|------|------|------|------|------|------|------|------|------|------|
| MAD-X Adapters    | 68.3 | 66.8 | 75.9 | 49.4 | 76.2 | 74.0 | 74.8 | 52.7 | 57.3 | 66.1 |
| Random Projection | 68.9 | 69.0 | 77.5 | 53.8 | 76.8 | 79.8 | 76.5 | 57.6 | 61.2 | 69.0 |
| MCD               | 68.5 | 68.1 | 77.1 | 54.7 | 76.1 | 76.9 | 75.4 | 53.6 | 59.3 | 67.7 |

Table 1: NER results (F1 scores) for nine languages.

| XQuAD             | hi   | vi   | de   | avg  |
|-------------------|------|------|------|------|
| MAD-X Adapters    | 68.1 | 71.4 | 71.8 | 70.4 |
| Random Projection | 68.2 | 72.2 | 72.2 | 70.9 |
| MCD               | 68.6 | 72.9 | 73.5 | 71.7 |

Table 2: XQuAD results (F1 scores) for three languages.
Datasets. We conduct cross-lingual transfer experiments on three tasks, Named Entity Recognition (NER), Question Answering (QA) and Natural Language Inference (NLI), where the source language is always English. For NER, we use the WikiANN dataset (Rahimi et al., 2019), and show results for nine languages Hindi, Vietnamese, German, Indonesian, Icelandic, Ilocano, Swahili, Burmese and Javanese with roughly 20K instances in the English train set and between 1K and 10K
instances in the target dev and test sets. For QA,
we use XQuAD (Artetxe et al., 2019), a multilingual extension of SQuAD (Rajpurkar et al., 2016)
and we report results for Hindi, Vietnamese and German consisting of around 87K examples in the English SQuAD train set and 1190 instances in the three target dev sets. For NLI, we use the AmericasNLI dataset (Ebrahimi et al., 2021) which is an extension of the XNLI dataset (Conneau et al.,
2018) with low-resource American languages. We report results on Quechua and Guarani, consisting of 392k instances in the English train set and 2490 and 5010 instances in the dev and test sets, respectively for each target language.
Training setup. We use transformer models from the adapter-transformers2fork of the HuggingFace transformers library (Wolf et al., 2020). We use pre-trained language adapters from AdapterHub
(Pfeiffer et al., 2020a) for our transfer experiments.
XQuAD and NLI fine-tuning experiments were conducted on a single NVIDIA A100 80 GB GPU for 15 epochs and 10 epochs respectively, with learning rate 1e-4 and batch size 16. NER experiments were run for 30 epochs on an Nvidia 1080 Ti with 12 GB RAM.

| NLI               | qu   | gn   | avg  |
|-------------------|------|------|------|
| MAD-X Adapters    | 48.2 | 36.0 | 42.1 |
| Random Projection | 49.3 | 37.5 | 43.4 |
| MCD               | 48.1 | 37.8 | 42.9 |

Table 3: NLI results for two languages.
Further details can be found in Appendix A.
## 4 Results
NER, XQuAD and NLI results are shown in Table 1, Table 2 and Table 3, respectively. All values correspond to F1 scores averaged over 3 different seeds. We use the target language validation set to choose the best hyperparameter values for all experiments. Both MCD and random projections show consistent improvement over the MAD-X baseline numbers. With MCD, we explicitly instruct the model when to project. This removes a hyperparameter from the setup, compared to random projections, while maintaining consistent performance gains over the baseline.

To further analyze MCD, we consider the fraction of embeddings being projected onto the target language subspace for NER. Table 4 shows the fraction of embeddings projected during training (averaged across all layers) for each language. For languages dissimilar to en (such as hi and id), it makes sense that the projection fractions are high since the language subspace means are closer to the source language mean (Chang et al., 2022), compared to languages more similar to en like de and is. Figure 2 shows how projection fractions vary across layers averaged across training epochs. We see high projection rates in early and final layers across languages. This correlates with these layers encoding a lot of English-specific information (Rogers et al., 2020) via training on the task-specific English data, thus triggering projections via MCD often.

|             | hi   | vi   | de   | id   | is   |
|-------------|------|------|------|------|------|
| Proj. Frac. | 0.65 | 0.57 | 0.57 | 0.63 | 0.55 |

Table 4: Projection percentages for NER.

![3_image_0.png](3_image_0.png)
## 5 Related Work
Multilingual language models like mBERT (Devlin, 2018), XLM-R (Conneau et al., 2020) possess some zero-shot cross-lingual capabilities, even without any explicit finetuning on the languages of interest (Wu and Dredze, 2019; Pires et al., 2019).
Such transfer without any finetuning could lead to degradation in performance across certain language pairs (Hu et al., 2020). Nevertheless, multilingual models are a good foundation to bootstrap and further develop cross-lingual generalization.
While there is a rapidly growing body of work on cross-lingual transfer, very few approaches utilize language-specific subspaces for this purpose.
Both Choenni and Shutova (2020) and Chang et al.
(2022) construct language-specific subspaces in multilingual models for an exploratory analysis of the model's representations. Yang et al. (2021) use projections on language-specific subspaces to remove language-specific information from the representations. We note such removal of language bias did not perform well on cross-lingual transfer in our experiments. Parović et al. (2022) train bilingual language adapters using both source and target language text before task adapter training.
However, this requires training language adapters using both source and target language unlabelled text, for every language pair, in addition to training task adapters. In contrast, our setup is a simple architectural extension of MAD-X, requiring no additional training once the subspaces are computed for each language. To the best of our knowledge, ours is the first work to exploit language-specific subspaces for cross-lingual transfer.
## 6 Conclusions
In this work, we present a new adapter-based cross-lingual transfer technique for an a priori known set of target languages. We construct language subspaces using contextualized representations for source and target languages. Representations during task-specific training are projected onto the target subspace either at random with a fixed probability or when they are closer to the source language mean embedding.
Both schemes consistently improve zero-shot transfer for three natural language understanding tasks across many languages.
## Acknowledgements
The first author (Ujan) was supported by the Uplink Internship Program of the India Chapter of ACM
SIGKDD. The authors are thankful to the anonymous reviewers for their constructive suggestions that helped improve this submission.
## Limitations
While our proposed projection techniques often improve cross-lingual transfer, the choice of the projection layer and the projection probability in the case of random projection are hyperparameters that vary across tasks and languages. Our ongoing work involves identifying a mechanism via which we can parameterize these quantities, enabling the model to directly learn the optimal layer and probability values for projection.
## References
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama.
2019. On the cross-lingual transferability of monolingual representations. *CoRR*, abs/1910.11856.
Tyler A. Chang, Zhuowen Tu, and Benjamin K. Bergen.
2022. The geometry of multilingual language model representations. arXiv:2205.10964.
Rochelle Choenni and Ekaterina Shutova. 2020. What does it mean to be language-agnostic? probing multilingual sentence encoders for typological properties.
arXiv:2009.12862.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale.
arXiv:1911.02116.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk,
and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Jacob Devlin. 2018. Multilingual bert readme document.
Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models in truly low-resource languages. *CoRR*, abs/2104.08726.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly.
2019. Parameter-efficient transfer learning for nlp.
arXiv:1902.00751.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv:2003.11080.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, and Nicolas Patry.
2020. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019, pages 9–16, Mannheim. Leibniz-Institut für Deutsche Sprache.
Marinela Parović, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2022. BAD-X: Bilingual adapters improve zero-shot cross-lingual transfer. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1791–1799, Seattle, United States. Association for Computational Linguistics.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54, Online. Association for Computational Linguistics.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An adapter-based framework for multi-task cross-lingual transfer.
arXiv:2005.00052.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual bert? Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. *arXiv e-prints*,
page arXiv:1606.05250.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics, 8:842–866.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Anthony Moi Clement Delangue, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, and Sylvain Gugger. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Ziyi Yang, Yinfei Yang, Daniel Cer, and Eric Darve.
2021. A simple and effective method to eliminate the self language bias in multilingual representations.
arXiv:2109.04727.
## A Implementation Details
We use the xlm-roberta-base model from HuggingFace Transformers (Wolf et al., 2020) pretrained on 2.5 TB of CommonCrawl data3, for all our experiments. NLI and XQuAD experiments were conducted on a single NVIDIA A100 GPU (80 GB RAM) and the NER experiments ran on a single Nvidia 1080Ti GPU (12 GB RAM). We used a learning rate of 1e-4 with a batch size of 16. The hyperparameter choices for layers and probabilities for our experiments are given in Tables 5 and 6, respectively.

3 https://commoncrawl.org/

Table 5: For random projection, best-performing projection layers for different languages obtained via a grid search on validation sets.

|                    | NER |    |    |    |    |    |     |    |    | XQuAD |    |    | NLI |    |
|--------------------|-----|----|----|----|----|----|-----|----|----|-------|----|----|-----|----|
|                    | hi  | vi | de | id | is | sw | ilo | jv | my | hi    | vi | de | qu  | gn |
| Random Projections | 5   | 6  | 8  | 4  | 8  | 6  | 6   | 8  | 8  | 0     | 1  | 2  | 9   | 1  |
| MCD                | 10  | 2  | 8  | 4  | 0  | 6  | 0   | 0  | 7  | 9     | 2  | 9  | 11  | 11 |

Table 6: Probability values (as determined by tuning on validation sets) for the layers in Table 5.

|                    | NER |     |     |     |     |     |     |     |     | XQuAD |     |     | NLI |     |
|--------------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-------|-----|-----|-----|-----|
|                    | hi  | vi  | de  | id  | is  | sw  | ilo | jv  | my  | hi    | vi  | de  | qu  | gn  |
| Random Projections | 0.1 | 0.3 | 0.3 | 0.9 | 0.5 | 0.5 | 0.3 | 0.5 | 0.5 | 0.5   | 0.7 | 0.5 | 0.1 | 0.1 |
All datasets used are taken from HuggingFace Datasets (Lhoest et al., 2020). For evaluating models, we use the HuggingFace Evaluate library4 as well as the seqeval python package5.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
No potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**
3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3, appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3, appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
appendix

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
di-liello-etal-2023-context | Context-Aware Transformer Pre-Training for Answer Sentence Selection | https://aclanthology.org/2023.acl-short.40 | Answer Sentence Selection (AS2) is a core component for building an accurate Question Answering pipeline. AS2 models rank a set of candidate sentences based on how likely they answer a given question. The state of the art in AS2 exploits pre-trained transformers by transferring them on large annotated datasets, while using local contextual information around the candidate sentence. In this paper, we propose three pre-training objectives designed to mimic the downstream fine-tuning task of contextual AS2. This allows for specializing LMs when fine-tuning for contextual AS2. Our experiments on three public and two large-scale industrial datasets show that our pre-training approaches (applied to RoBERTa and ELECTRA) can improve baseline contextual AS2 accuracy by up to 8{\%} on some datasets. | # Context-Aware Transformer Pre-Training For Answer Sentence Selection
Luca Di Liello1∗, Siddhant Garg2, Alessandro Moschitti2
1University of Trento, 2Amazon Alexa AI
[email protected]
{sidgarg,amosch}@amazon.com
## Abstract
Answer Sentence Selection (AS2) is a core component for building an accurate Question Answering pipeline. AS2 models rank a set of candidate sentences based on how likely they answer a given question. The state of the art in AS2 exploits pre-trained transformers by transferring them on large annotated datasets, while using local contextual information around the candidate sentence. In this paper, we propose three pre-training objectives designed to mimic the downstream fine-tuning task of contextual AS2. This allows for specializing LMs when fine-tuning for contextual AS2. Our experiments on three public and two large-scale industrial datasets show that our pre-training approaches (applied to RoBERTa and ELECTRA)
can improve baseline contextual AS2 accuracy by up to 8% on some datasets.
## 1 Introduction
Answer Sentence Selection (AS2) is a fundamental task in QA, which consists of re-ranking a set of answer sentence candidates according to how correctly they answer a given question. From a practical standpoint, AS2-based QA systems can operate under much lower latency constraints than corresponding Machine Reading (MR) based QA
systems. Nowadays, latency is of particular importance because sources of information such as Knowledge Bases or Web Indexes may contain million or billion of documents. In AS2, latency can be minimized because systems process several sentences/documents in parallel, while MR systems parse the entire document/passage in a sliding window fashion before finding the answer span (Garg and Moschitti, 2021; Gabburo et al., 2022).
Modern AS2 systems (Garg et al., 2020; Laskar et al., 2020) use transformers to cross-encode question and answer candidates together. Recently, Lauriola and Moschitti (2021) proved that performing answer ranking using only the candidate sentence
is sub-optimal, e.g., the answer sentence may contain unresolved coreference with entities, or the sentence may lack specific context for answering the question. Several works (Ghosh et al., 2016; Tan et al., 2018; Han et al., 2021) have explored performing AS2 using context around answer candidates (for example, adjacent sentences) towards improving performance. Local contextual information, i.e., the previous and next sentences of the answer candidates, can help coreference disambiguation, and provide additional knowledge to the model. This helps to rank the best answer at the top, with minimal increase in compute requirements and latency.

∗ Work done as an intern at Amazon Alexa AI
Previous research works (Lauriola and Moschitti, 2021; Han et al., 2021) have directly used existing pre-trained transformer encoders for contextual AS2, by fine-tuning them on an input comprising of multiple sentences with different roles, i.e., the question, answer candidate, and context
(previous and following sentences around the candidate). This structured input creates practical challenges during fine-tuning, as standard pre-training approaches do not align well with the downstream contextual AS2 task, e.g., the language model does not know the role of each of these multiple sentences in the input. In other words, the extended sentence-level embeddings have to be learnt directly during fine-tuning, causing underperformance empirically. This effect is amplified when the downstream data for fine-tuning is small, indicating models struggling to exploit the context.
In this paper, we tackle the aforementioned issues by designing three pre-training objectives that structurally align with the final contextual AS2 task, and can help improve the performance of language models when fine-tuned for AS2. Our pre-training objectives exploit information in the structure of paragraphs and documents to pre-train the context slots in the transformer text input. We evaluate our strategies on two popular pre-trained transformers over five datasets. The results show that our approaches using structural pre-training can effectively adapt transformers to process contextualized input, improving accuracy by up to 8% when compared to the baselines on some datasets.
## 2 Related Work
Answer Sentence Selection TANDA (Garg et al.,
2020) established the SOTA for AS2 using a large dataset (ASNQ) for transfer learning. Other approaches for AS2 include: separate encoders for question and answers (Bonadiman and Moschitti, 2020), and compare-aggregate and clustering to improve answer relevance ranking (Yoon et al., 2019).
Contextual AS2 Ghosh et al. (2016) use LSTMs for answers and topics, improving accuracy for next sentence selection. Tan et al. (2018) use GRUs to model answers and local context, improving performance on two AS2 datasets. Lauriola and Moschitti (2021) propose a transformer encoder that uses context to better disambiguate between answer candidates. Han et al. (2021) use unsupervised similarity matching techniques to extract relevant context for answer candidates from documents.
Pre-training Objectives Pre-training sentencelevel objectives such as NSP (Devlin et al., 2019)
and SOP (Lan et al., 2020) have been widely explored for transformers to improve accuracy for downstream classification tasks. However, the majority of these objectives are agnostic of the final tasks. End task-aware pre-training has been studied for summarization (Rothe et al., 2021), dialogue
(Li et al., 2020), passage retrieval (Gao and Callan, 2021), MR (Ram et al., 2021) and multi-task learning (Dery et al., 2021). Lee et al. (2019), Chang et al. (2020) and Sachan et al. (2021) use the Inverse Cloze task to improve retrieval performance for bi-encoders, by exploiting paragraph structure via self-supervised objectives. For AS2, recently Di Liello et al. (2022a) proposed paragraph-aware pre-training for joint classification of multiple candidates. Di Liello et al. (2022b) propose a sentencelevel pre-training paradigm for AS2 by exploiting document and paragraph structure. However, these works do not consider the structure of the downstream task (specifically contextual AS2). To the best of our knowledge, ours is the first work to study transformer pre-training strategies for AS2 augmented with context using cross-encoders.
## 3 Contextual AS2
AS2 Given a question q and a set of answer candidates S = {s1, . . . , sn}, the goal is to find the best sk that answers q. This is typically done by learning a binary classifier C of answer correctness by independently feeding the pairs (q, si), i ∈ {1, . . . , n} as input to C, and making C predict whether si correctly answers q or not. At inference time, we find the best answer for q by selecting the answer candidate sk which scores the highest probability of correctness, k = arg maxi C(q, si).
Contextual AS2 Contextual models for AS2 exploit additional context to improve the final accuracy. This has been shown to be effective (Lauriola and Moschitti, 2021) in terms of overcoming coreference disambiguation and lack of enough information to rank the best answer at the top. Different from the above case, contextual AS2 models receive as input a tuple (q, si, ci) where ci is the additional context. ci is usually the sentences immediately before and after the answer candidate.
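To make the ranking step concrete, the sketch below scores (q, si, ci) tuples with a generic HuggingFace cross-encoder and returns candidates sorted by the probability of correctness; the checkpoint and the way the candidate and its context are packed into the second segment are simplifications for illustration, not the exact models or input format used in our experiments.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def rank_candidates(question, candidates, contexts):
    # Pack each candidate together with its local context as the second segment.
    second = [f"{s} {tokenizer.sep_token} {c}" for s, c in zip(candidates, contexts)]
    batch = tokenizer([question] * len(candidates), second,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**batch).logits.softmax(dim=-1)[:, 1]  # P(correct | q, s, c)
    order = torch.argsort(probs, descending=True)
    return [(candidates[i], float(probs[i])) for i in order]
```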
## 4 Context-Aware Pre-Training Objectives
We design a transformer pre-training task that aligns well with fine-tuning contextual AS2 models, both *structurally* and *semantically*. We exploit the division of large corpora in documents and the subdivision of documents in paragraphs as a source of supervision. We provide triplets of text spans
(*a, b, c*) as model inputs when pre-training, which emulates the structure of (q, si, ci) for contextual AS2 models, where a, b and c play the analogous role of the question, the candidate sentence (that needs to be classified), and the context (which helps in predicting (*a, b*) correctness), respectively. Formally, given a document D from the pre-training corpus, the task is to infer if a and b are two sentences extracted from the same paragraph P ∈ D.
Following Di Liello et al. (2022b), we term this task: "Sentences in Same Paragraph (SSP)".
Intuition for SSP Consider an example of a Wikipedia paragraph composed of three sentences:
s1: Lovato was brought up in Dallas, Texas; she began playing the piano at age seven and guitar at ten, when she began dancing and acting classes.
s2: In 2002, Lovato began her acting career on the children's television series Barney & Friends, portraying the role of Angela.
s3: She appeared on Prison Break in 2006 and on Just Jordan the following year.
Given a question of the type "What are the acting roles of X", a standard LM can easily reason to select answers of the type "X acted/played in Y", by matching the subject argument of the question with the object argument of the answer, for the same predicate acting/playing. However, the same LM would have a harder time selecting answers of the type "X appeared in Y " because this requires learning the relation between the entire predicate argument structure of acting vs. the one of appearing. A LM pre-trained using the SSP task can learn these implications, as it reasons about concepts from s3, e.g., "appearing in Prison Break and Just Jordan" (which are TV series), being related to concepts from s2, e.g., "having an acting career" as the sentences belong to the same paragraph.
The semantics learned by connecting sentences in the same paragraph transfer well downstream, as the model can re-use previously learned relations between entities and concepts, and apply them between question and answer candidates. Relations in one sentence may be used to formulate questions that can be answered in the other sentence, which is most likely to happen for sentences in the same paragraph since every paragraph describes the same general topic from a different perspective.
We design three ways of choosing the appropriate contextual information c for SSP. We present details on how we sample spans a, b and c from the pre-training documents below.
Static Document-level Context (SDC) Here, we choose the context c to be the first paragraph P0 of D = {P0, . . . , Pn} from which b is extracted.
This is based on the intuition that the first paragraph acts as a summary of a document's content
(Chang et al., 2020): this strong context can help the model at identifying if b is extracted from the same paragraph as a. We call this static document-level context since the contextual information c is constant for any b extracted from the same document D. Specifically, the positive examples are created by sampling a and b from a single random paragraph Pi ∈ D, i > 0. For the previously chosen a, we create hard negatives by randomly sampling a sentence b from different paragraphs Pj ∈ D, j ̸= i ∧ j > 0. We set c = P0 for this negative example as well since b still belongs to D. We create easy negatives for a chosen a by sampling b from a random paragraph P′i in another document D′ ̸= D. In this case, c is chosen as the first paragraph P′0 of D′ since the context in the downstream AS2 task is associated with the answer candidate, and not with the question.
## Dynamic Paragraph-Level Context (DPC)
We dynamically select the context c to be the paragraph from which the sentence b is extracted. We create positive examples by sampling a and b from a single random paragraph Pi ∈ D, and we set the context as the remaining sentences in Pi, i.e.,
c = Pi \ {a, b}. Note that leaving a and b in Pi would make the task trivial. For the previously chosen a, we create hard negatives by sampling b from another random paragraph Pj ∈ D, j ̸= i, and setting c = Pj \ {b}. We create easy negatives for a chosen a by sampling b from a random P′i in another document D′ ̸= D, and setting c = P′i \ {b}.
## Dynamic Sentence-Level Local Context (DSLC)
We choose c to be the local context around the sentence b, i.e., the concatenation of the previous and next sentence around b in P ∈ D. To deal with corner cases, we require at least one of the previous or next sentences of b to exist (e.g., the next sentence may not exist if b is the last sentence of the paragraph P). We term this DSLC as the contextual information c is specified at sentence-level and changes correspondingly to every sentence b extracted from D. We create positive pairs similar to SDC and DPC by sampling a and b from the same paragraph Pi ∈ D, with c being the local context around b in Pi (and a /∈ c). We automatically discard paragraphs that are not long enough to ensure the creation of a positive example. We generate hard negatives by sampling b from another Pj ∈ D, j ̸= i, while for easy negatives, we sample b from a P′i ∈ D′, D′ ̸= D (in both cases c is set as the local context around b).
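To make the sampling above concrete, the sketch below builds DPC-style SSP examples from a document represented as a list of paragraphs, each a list of sentences; the document representation and helper names are our own illustration (SDC and DSLC differ only in how c is chosen):

```python
import random

def dpc_examples(document, other_document):
    """Build (a, b, c, label) SSP examples with Dynamic Paragraph-level Context.

    `document` and `other_document` are lists of paragraphs; each paragraph is
    a list of sentences. label = 1 iff a and b come from the same paragraph.
    """
    examples = []
    paragraphs = [p for p in document if len(p) >= 3]
    if not paragraphs or not other_document:
        return examples

    p_i = random.choice(paragraphs)
    a, b = random.sample(p_i, 2)
    # Positive: a, b from the same paragraph; c is the rest of that paragraph.
    examples.append((a, b, " ".join(s for s in p_i if s not in (a, b)), 1))

    # Hard negative: b from a different paragraph of the same document.
    hard = [p for p in document if p is not p_i]
    if hard:
        p_j = random.choice(hard)
        b_neg = random.choice(p_j)
        examples.append((a, b_neg, " ".join(s for s in p_j if s != b_neg), 0))

    # Easy negative: b from a paragraph of another document.
    p_k = random.choice(other_document)
    b_easy = random.choice(p_k)
    examples.append((a, b_easy, " ".join(s for s in p_k if s != b_easy), 0))
    return examples
```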
## 5 Datasets
Pre-Training To perform a fair comparison and avoid any improvement stemming from additional pre-training data, we use the same corpora as RoBERTa (Liu et al., 2019). This includes the English Wikipedia, the BookCorpus (Zhu et al.,
2015), OpenWebText (Gokaslan and Cohen, 2019)
and CC-News 1. We pre-process each dataset by filtering away: (i) sentences shorter than 20 characters, (ii) paragraphs shorter than 60 characters and (iii) documents shorter than 200 characters.
We split paragraphs into sequences of sentences using the NLTK tokenizer (Loper and Bird, 2002) and create the SSP pre-training datasets following Section 4. Refer to Appendix A.1 for more details.

1 STORIES is no longer publicly available, hence omitted.

| Model                            | Context | ASNQ MAP   | ASNQ P@1   | WikiQA MAP | WikiQA P@1 | NewsAS2 MAP | NewsAS2 P@1 | IQAD B1 MAP | IQAD B1 P@1 | IQAD B2 MAP | IQAD B2 P@1 |
|----------------------------------|---------|------------|------------|------------|------------|-------------|-------------|-------------|-------------|-------------|-------------|
| ELECTRA-Base                     | ✗       | 69.3 (0.0) | 65.0 (0.2) | 85.7 (0.9) | 78.5 (1.6) | 81.3 (0.2)  | 75.6 (0.2)  | Baseline    | Baseline    | Baseline    | Baseline    |
| ELECTRA-Base ♣                   | ✓       | 72.3 (0.6) | 68.1 (0.8) | 83.1 (1.3) | 73.8 (2.1) | 82.0 (0.4)  | 76.0 (0.5)  | -0.6%       | -1.0%       | -0.4%       | -0.9%       |
| (Ours) ELECTRA-Base + SSP (SDC)  | ✓       | 74.7 (0.5) | 69.6 (0.3) | 88.7 (0.1) | 82.9 (0.2) | 82.7 (0.2)  | 77.0 (0.4)  | +1.2%       | +0.6%       | +0.9%       | +1.4%       |
| (Ours) ELECTRA-Base + SSP (DPC)  | ✓       | 74.4 (0.2) | 70.5 (0.2) | 88.0 (0.6) | 81.3 (0.6) | 82.7 (0.5)  | 77.3 (0.7)  | +0.4%       | -0.6%       | +0.4%       | +0.1%       |
| (Ours) ELECTRA-Base + SSP (DSLC) | ✓       | 74.3 (0.3) | 70.0 (0.8) | 87.0 (0.9) | 79.7 (1.4) | 82.8 (0.4)  | 77.3 (0.5)  | +1.0%       | +0.6%       | +0.2%       | 0.0%        |
| (Ours) ELECTRA-Base + SSP (All)  | ✓       | 73.8 (0.4) | 68.8 (0.4) | 87.5 (0.5) | 81.5 (0.7) | 82.7 (0.2)  | 77.2 (0.3)  | +0.1%       | -0.4%       | +0.1%       | -0.1%       |
| RoBERTa-Base                     | ✗       | 68.2 (0.5) | 63.5 (0.5) | 85.1 (1.9) | 77.2 (3.1) | 81.7 (0.1)  | 76.2 (0.2)  | +0.6%       | +0.1%       | +0.7%       | +1.3%       |
| RoBERTa-Base ♣                   | ✓       | 71.6 (0.6) | 67.6 (0.6) | 84.4 (1.5) | 77.0 (2.1) | 82.4 (0.2)  | 76.6 (0.7)  | +0.4%       | 0.0%        | +1.1%       | +1.7%       |
| (Ours) RoBERTa-Base + SSP (SDC)  | ✓       | 73.1 (0.5) | 68.7 (0.8) | 87.8 (0.6) | 81.8 (0.9) | 82.8 (0.1)  | 76.9 (0.2)  | +1.7%       | +3.0%       | +1.0%       | +1.7%       |
| (Ours) RoBERTa-Base + SSP (DPC)  | ✓       | 73.2 (0.4) | 69.2 (0.5) | 89.9 (0.2) | 85.2 (0.4) | 82.3 (0.1)  | 76.0 (0.1)  | +0.4%       | +1.2%       | +1.2%       | +2.7%       |
| (Ours) RoBERTa-Base + SSP (DSLC) | ✓       | 72.9 (0.4) | 69.0 (0.3) | 87.8 (0.9) | 81.6 (1.3) | 82.6 (0.2)  | 77.0 (0.2)  | +0.6%       | +1.5%       | +1.0%       | +1.4%       |
| (Ours) RoBERTa-Base + SSP (All)  | ✓       | 72.9 (0.6) | 68.2 (0.8) | 88.2 (0.9) | 82.4 (1.7) | 83.0 (0.2)  | 77.3 (0.5)  | +1.2%       | +2.4%       | +1.4%       | +2.2%       |
Contextual AS2 We evaluate our pre-trained models on three public and two industrial datasets for contextual AS2. For all datasets, we use the standard "clean" setting, by removing questions in the dev. and test sets which have only positive or only negative answer candidates, following standard practice in AS2 (Garg et al., 2020). We measure performance using Precision-at-1 (P@1) and Mean Average Precision (MAP) metrics.
- **ASNQ** is a large scale AS2 dataset (Garg et al.,
2020) derived from NQ (Kwiatkowski et al., 2019).
The questions are user queries from Google search, and answers are extracted from Wikipedia.
- **WikiQA** is a small dataset (Yang et al., 2015)
for AS2 with questions extracted from Bing search engine and answer candidates retrieved from the first paragraph of Wikipedia articles.
- **IQAD** is a large scale industrial dataset containing de-identified questions asked by users to Alexa virtual assistant. IQAD contains ∼220k questions where answers are retrieved from a large web index (∼1B web pages) using Elasticsearch. We use two different evaluation benchmarks for IQAD: (i)
IQAD Bench 1, which contains 2.2k questions with
∼15 answer candidates annotated for correctness by crowd workers and (ii) *IQAD Bench 2*, which contains 2k questions with ∼15 answer candidates annotated with explicit fact verification guidelines for correctness by crowd workers. (Our manual analysis indicates a higher annotation quality for QA pairs in Bench 2 than Bench 1). Results on IQAD are presented relative to a baseline due to the data being internal.
- **NewsAS2** is a large AS2 dataset created from NewsQA (Trischler et al., 2017), a MR dataset, following the procedure of Garg et al. for ASNQ.
The dataset contains ∼70K human generated questions with answers extracted from *CNN/Daily Mail*.
More details about the procedure to create NewsQA are given in Appendix A.2.
## 6 Experiments
Continuous Pre-Training We use RoBERTa-Base and ELECTRA-Base public checkpoints (pre-training from scratch would have required large amounts of computational resources), and perform continuous pre-training using our objectives for
∼10% of the compute used by the original models.
Complete details are given in Appendix C. We experiment with each of our pre-training objectives independently, as well as combining all of them.
Fine-Tuning We fine-tune each continuously pretrained model on all the AS2 datasets. As baselines, we consider (i) standard pairwise-finetuned AS2 models, using only the question and the answer candidate, and (ii) contextual fine-tuned AS2 models from (Lauriola and Moschitti, 2021), which use the question, answer candidate and local context.
## 7 Results
Table 1 summarizes the results of our experiments, averaged across 5 runs, reporting standard deviations and statistically significant improvements over the baselines.
Public datasets On ASNQ, our pre-trained models get 3.8 - 5.5% improvement in P@1 over the baseline using only the question and answer. Our models also outperform the stronger contextual AS2 baselines (1.6% with RoBERTa and 2.4% with ELECTRA), indicating that our task-aware pre-training can help improve the downstream finetuning performance. On NewsAS2, we observe a similar trend, where all our models (except one)
outperform both the standard and contextual baselines. On WikiQA, a smaller dataset, the contextual baseline under-performs the non-contextual baseline, highlighting that with few samples the model struggles to adapt and reason over three text spans. For this reason, our pre-training approaches provide the maximum accuracy improvement on WikiQA (up to 8 - 9.1% over the non-contextual and contextual baselines).
Industrial datasets On IQAD, we observe that the contextual baseline performs on par or lower than the non-contextual baseline, indicating that off-the-shelf transformers cannot effectively exploit the context available for this dataset. The answer candidates and context for IQAD are extracted from millions of web documents. Thus, learning from the context in IQAD is a harder task than learning from it on ASNQ, where the context belongs to a single Wikipedia document. Our pre-trained models help to process the diverse and possibly noisy context of IQAD, and produce a significant improvement in P@1 over the contextual baseline.
Combining the 3 SSP objectives We observe that combining all the objectives together does not always outperform the individual objectives, which is probably due to the misalignment between the different approaches for sampling context in our pre-training strategies. Notice that we used a single classification head for all the three tasks, indirectly asking the model also to recognize the task to be solved among SDC, DPC or DSLC. Experiments with separate classification heads (one for each task) led to worse results in early experiments.
Choosing the optimal SSP objective Our finetuning datasets have significantly different structures: ASNQ, NewsAS2 and WikiQA have answer candidates sourced from a single document
(Wikipedia for ASNQ and WikiQA and CNN Daily Mail articles for NewsQA), while IQAD has answer candidates extracted from multiple documents.
This also results in the context for the former being more homogeneous (context for all candidates for a question is extracted from the same document),
while for the latter the context is more heterogeneous (extracted from multiple documents for different answer candidates).
Our DPC and DSLC pre-training approaches are well aligned in terms of the context that is used to help the SSP predictions. The former uses the remainder of the paragraph P as context (after removing a and b), while the latter uses the sentence previous and next to b in P. We observe empirically that the contexts for DPC and DSLC often overlap partially, and are sometimes even identical (considering average length of paragraphs in the pre-training corpora is 4 sentences). This explains why models pre-trained using both these approaches perform comparably in Table 1 (with only a very small gap in P@1 performance).
On IQAD, we observe that the SDC approach of providing context for SSP outperforms DPC
and DSLC. In SDC, the context c can potentially be very different from a and b (as it corresponds to the first paragraph of the document), and this can aid exploiting information and effectively ranking answer candidates from multiple documents
(possibly from different domains) like for IQAD.
For these reasons, we recommend using DPC and DSLC when answer candidates are extracted from the same document, and SDC when candidates are extracted from multiple sources.
## 8 Conclusion And Future Work
In this paper, we have proposed three pre-training strategies for transformers, which (i) are aware of the downstream task of contextual AS2, and (ii) use the document and paragraph structure information to define effective objectives. Our experiments on three public and two industrial datasets using two transformer models show that our pre-training strategies can provide significant improvement over the contextual AS2 models.
In addition to local context around answer candidates (the previous and successive sentences), other contextual signals can also be incorporated to improve the relevance ranking of answer candidates.
Meta-information like document title, abstract/first-paragraph, domain name, etc. corresponding to the document containing the answer candidates can help answer ranking. These signals differ from the previously mentioned local answer context as they provide "global" contextual information pertaining to the documents for AS2. Our SDC objective, which uses the first paragraph of the document for the context input slot, captures global information pertaining to the document, and we hypothesise that this may improve downstream performance using other global contextual signals in addition to local answer context.
## Limitations
Our proposed pre-training approaches require access to large GPU resources (pre-training is performed on 350M training samples for large language models containing hundreds of millions of parameters). Even using 10% of the original pre-training compute, the additional pre-training takes a long time to finish (several days even on 8 NVIDIA A100 GPUs). This highlights that this procedure cannot easily be re-done with newer data being made available in an online setting. However, the benefit of our approach is that once the pre-training is complete, our released model checkpoints can be directly fine-tuned (even on smaller target datasets) for the downstream contextual AS2 task. For the experiments in this paper, we only consider datasets from the English language; however, we conjecture that our techniques should work similarly for other languages with limited morphology. Finally, we believe that the three proposed objectives could be better combined in a multi-task training scenario where the model has to jointly predict the task and the label. At the moment, we only tried using different classification heads for this but the results were worse.
## References
Daniele Bonadiman and Alessandro Moschitti. 2020.
A study on efficiency, accuracy and document structure for answer sentence selection. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5211–5222, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval.
Lucio M. Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. 2021. Should we be pre-training? an argument for end-task aware training as an alternative.
Nicki Skafte Detlefsen, Jiri Borovec, Justus Schock, Ananya Harsh Jha, Teddy Koker, Luca Di Liello, Daniel Stancl, Changsheng Quan, Maxim Grechkin, and William Falcon. 2022. Torchmetrics - measuring reproducibility in pytorch. Journal of Open Source Software, 7(70):4101.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Luca Di Liello, Siddhant Garg, Luca Soldaini, and Alessandro Moschitti. 2022a. Paragraph-based transformer pre-training for multi-sentence inference. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, Washington. Association for Computational Linguistics.
Luca Di Liello, Siddhant Garg, Luca Soldaini, and Alessandro Moschitti. 2022b. Pre-training transformer models with sentence-level objectives for answer sentence selection. In *Proceedings of the 2022* Conference on Empirical Methods in Natural Language Processing, pages 11806–11816, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
William Falcon et al. 2019. Pytorch lightning. *GitHub.*
Note: https://github.com/PyTorchLightning/pytorchlightning, 3(6).
Matteo Gabburo, Rik Koncel-Kedziorski, Siddhant Garg, Luca Soldaini, and Alessandro Moschitti. 2022. Knowledge transfer from answer ranking to answer generation. In *Proceedings of the 2022 Conference* on Empirical Methods in Natural Language Processing, pages 9481–9495, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Luyu Gao and Jamie Callan. 2021. Unsupervised corpus aware language model pre-training for dense passage retrieval.
Siddhant Garg and Alessandro Moschitti. 2021. Will this question be answered? question filtering via answer model distillation for efficient question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7329–7346, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Siddhant Garg, Thuy Vu, and Alessandro Moschitti.
2020. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(05):7780–7788.
Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm
(clstm) models for large scale nlp tasks.
Aaron Gokaslan and Vanya Cohen. 2019. Openwebtext corpus. http://Skylion007.github.io/
OpenWebTextCorpus.
Rujun Han, Luca Soldaini, and Alessandro Moschitti.
2021. Modeling context in answer sentence selection systems on a latency budget.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. Albert: A lite bert for self-supervised learning of language representations.
Md Tahmid Rahman Laskar, Jimmy Xiangji Huang, and Enamul Hoque. 2020. Contextualized embeddings based transformer encoder for sentence similarity modeling in answer selection task. In *Proceedings* of the Twelfth Language Resources and Evaluation Conference, pages 5505–5514, Marseille, France. European Language Resources Association.
Ivano Lauriola and Alessandro Moschitti. 2021. Answer sentence selection using local and global context in transformer models. In *ECIR 2021*.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova.
2019. Latent retrieval for weakly supervised open domain question answering.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021.
Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Junlong Li, Zhuosheng Zhang, Hai Zhao, Xi Zhou, and Xiang Zhou. 2020. Task-specific objectives of pretrained language models for dialogue adaptation.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach.
Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. *CoRR*, cs.CL/0205028.
Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, and Omer Levy. 2021. Few-shot question answering by pretraining span selection. In Proceedings of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3066–3079, Online.
Association for Computational Linguistics.
Sascha Rothe, Joshua Maynez, and Shashi Narayan.
2021. A thorough evaluation of task-specific pretraining for summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 140–145, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Devendra Singh Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L Hamilton, and Bryan Catanzaro. 2021. End-to-end training of neural retrievers for open-domain question answering.
Chuanqi Tan, Furu Wei, Qingyu Zhou, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. 2018.
Context-aware answer sentence selection with hierarchical gated recurrent neural networks. *IEEE/ACM*
Transactions on Audio, Speech, and Language Processing, 26(3):540–549.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yi Yang, Scott Wen-tau Yih, and Chris Meek. 2015.
Wikiqa: A challenge dataset for open-domain question answering. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language* Processing. ACL - Association for Computational Linguistics.
Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. 2019. A compareaggregate model with latent clustering for answer selection.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*.
## Appendix A Additional Dataset Details A.1 Pre-Training Datasets
For each SSP objective, we randomly sample up to 2 hard negatives, and additionally sample easier negatives until the total number of negatives is 4.
Instead of reasoning in terms of sentences, we design our SSP objectives to create a and b as small spans composed of 1 or more contiguous sentences.
For a, we keep the length equal to 1 sentence because it emulates the question, which typically is just a single sentence. For b, we randomly assign a length between 1 and 3 sentences. The length of the context c cannot be decided a-priori because it depends on the specific pre-training objective and the length of the paragraph. After the pre-processing, all the resulting continuous pre-training datasets contain around 350M training examples each.
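To make the span construction concrete, the following is a minimal sketch (ours, not the released pre-processing code) of how an (a, b) pair with these lengths could be sampled from a sentence-split paragraph; it only illustrates the span lengths, not the per-objective positive/negative sampling, and the function name is an assumption.

```python
# Hedged sketch: sample a 1-sentence "a" and a 1-3 sentence contiguous "b"
# from a paragraph, following the span lengths described above.
import random

def sample_a_b(paragraph_sentences):
    """paragraph_sentences: list of sentence strings from one paragraph."""
    if len(paragraph_sentences) < 2:
        return None  # need at least one sentence for a and one for b
    # a is a single sentence, emulating the question
    a_idx = random.randrange(len(paragraph_sentences))
    a = paragraph_sentences[a_idx]
    remaining = paragraph_sentences[:a_idx] + paragraph_sentences[a_idx + 1:]
    # b is a span of 1 to 3 contiguous sentences from the rest of the paragraph
    b_len = random.randint(1, min(3, len(remaining)))
    b_start = random.randrange(len(remaining) - b_len + 1)
    b = " ".join(remaining[b_start:b_start + b_len])
    return a, b
```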
## A.2 Newsqa Dataset
We created NewsAS2 by splitting each document in NewsQA into individual sentences with the NLTK
tokenizer (Loper and Bird, 2002). Then, for each sentence, we assign a positive label if it contains at least one of the annotated answers for that document, and assign a negative label otherwise. The resulting dataset has 1.69% positive sentences per query in the training set, 1.66% in the dev set and 1.68% in the test set.
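As an illustration of this conversion, here is a hedged sketch (not the exact script we release) of how a NewsQA story could be turned into AS2 candidates; the field names and the simple substring check on answer strings are simplifying assumptions.

```python
# Hedged sketch: split a NewsQA story into sentences and label each sentence
# positive if it contains one of the annotated answer strings.
# nltk.download("punkt") may be required the first time.
from nltk.tokenize import sent_tokenize

def build_news_as2_examples(question, story_text, answer_strings):
    examples = []
    for sentence in sent_tokenize(story_text):
        label = int(any(ans in sentence for ans in answer_strings))
        examples.append(
            {"question": question, "candidate": sentence, "label": label}
        )
    return examples
```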
## B Frameworks & Infrastructure
Our framework is based on (i) HuggingFace Transformers (Wolf et al., 2020) for model architecture,
(ii) HuggingFace Datasets (Lhoest et al., 2021)
for data processing, (iii) PyTorch-Lightning for distributed training (Falcon et al., 2019) and (iv)
TorchMetrics for AS2 evaluation metrics (Detlefsen et al., 2022). We performed our pre-training experiments for every model on 8 NVIDIA A100 GPUs with 40GB of memory each, using *fp16* for tensor core acceleration.
## C Details Of Continuous Pre-Training
We experiment with RoBERTa-Base and ELECTRA-Base public checkpoints. RoBERTa-Base contains 124M parameters, while ELECTRA-Base contains 33M parameters in the generator and 108M in the discriminator.
We do continuous pre-training starting from the aforementioned models for 400K steps with a batch size of 4096 examples and a triangular learning rate with a peak value of $10^{-4}$ and 10K steps of warmup. In order to save resources, we found it beneficial to reduce the maximum sequence length to 128 tokens. In this setting, our models see ∼210B additional tokens each, which is 10% of what is used in the original RoBERTa pre-training. Our objectives are more efficient because the attention computational complexity grows quadratically with the sequence length, which in our case is 4 times smaller than in the original RoBERTa pre-training.

| Dataset | Train #Q | Train #QA | Dev #Q | Dev #QA | Test #Q | Test #QA |
|---------|----------|-----------|--------|---------|---------|----------|
| ASNQ | 57242 | 20377568 | 1336 | 463914 | 1336 | 466148 |
| WikiQA | 2118 | 20360 | 122 | 1126 | 237 | 2341 |
| IQAD | 221334 | 3894129 | 2434 | 43369 | 2252 / 2088 | 38587 / 33498 |
| NewsAS2 | 71561 | 1840533 | 2102 | 51844 | 2083 | 51472 |

Table 2: Number of unique questions and question-answer pairs in the fine-tuning datasets. IQAD Bench 1 and Bench 2 sizes are both reported in the Test set columns for IQAD.
We use cross-entropy as the loss function for all our pre-training and fine-tuning experiments.
Specifically, for RoBERTa pre-training we add the MLM loss to our proposed binary classification losses using equal weights (1.0) for both the loss terms. For ELECTRA pre-training, we sum three loss terms: MLM loss with a weight of 1.0, the Token Detection loss with a weight of 50.0, and our proposed binary classification losses with a weight of 1.0.
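A minimal sketch of how these weighted loss terms could be combined is shown below; the individual loss tensors are assumed to be computed elsewhere in the model, and the function names are ours rather than part of our released code.

```python
# Hedged sketch of the loss combinations described above.
import torch.nn.functional as F

def roberta_pretraining_loss(mlm_loss, ssp_logits, ssp_labels):
    # MLM loss and binary SSP classification loss, both with weight 1.0
    ssp_loss = F.cross_entropy(ssp_logits, ssp_labels)
    return 1.0 * mlm_loss + 1.0 * ssp_loss

def electra_pretraining_loss(mlm_loss, rtd_loss, ssp_logits, ssp_labels):
    # generator MLM loss (1.0), replaced-token-detection loss (50.0),
    # and binary SSP classification loss (1.0)
    ssp_loss = F.cross_entropy(ssp_logits, ssp_labels)
    return 1.0 * mlm_loss + 50.0 * rtd_loss + 1.0 * ssp_loss
```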
During continuous pre-training, we feed the text tuples (*a, b, c*) (as described in Section 4)
as input to the model in the following format:
'[CLS]a[SEP]b[SEP]c[SEP]'. To provide independent sentence/segment ids to each of the inputs a, b and c, we initialize the sentence embeddings layers of RoBERTa and ELECTRA from scratch, and extend them to an input size of 3.
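The snippet below is a hedged sketch (ours) of how such a three-segment input and the re-initialized, size-3 segment-embedding layer could be set up for a RoBERTa-style encoder in HuggingFace Transformers; the helper name `make_triple_input` and the truncation strategy are illustrative assumptions.

```python
# Hedged sketch of building '[CLS] a [SEP] b [SEP] c [SEP]' with 3 segment ids.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# RoBERTa ships with a single token type; re-initialize the segment embeddings
# from scratch with 3 rows so that a, b and c receive distinct segment ids.
model.config.type_vocab_size = 3
model.roberta.embeddings.token_type_embeddings = torch.nn.Embedding(
    3, model.config.hidden_size
)
model.roberta.embeddings.token_type_embeddings.weight.data.normal_(
    mean=0.0, std=model.config.initializer_range
)

def make_triple_input(a, b, c, max_len=128):
    cls, sep = tokenizer.cls_token_id, tokenizer.sep_token_id
    ids_a = tokenizer.encode(a, add_special_tokens=False)
    ids_b = tokenizer.encode(b, add_special_tokens=False)
    ids_c = tokenizer.encode(c, add_special_tokens=False)
    input_ids = [cls] + ids_a + [sep] + ids_b + [sep] + ids_c + [sep]
    token_type_ids = (
        [0] * (len(ids_a) + 2) + [1] * (len(ids_b) + 1) + [2] * (len(ids_c) + 1)
    )
    input_ids, token_type_ids = input_ids[:max_len], token_type_ids[:max_len]
    return {
        "input_ids": torch.tensor([input_ids]),
        "token_type_ids": torch.tensor([token_type_ids]),
        "attention_mask": torch.ones(1, len(input_ids), dtype=torch.long),
    }
```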
The pre-training of every model obtained by combining ELECTRA and RoBERTa architectures with our contextual pre-training objectives took around 3.5 days each on the machine configuration described in Appendix B. The dataset preparation required 10 hours over 64 CPU cores.
## D Details Of Fine-Tuning
The most common paradigm for AS2 fine-tuning is to consider publicly available pre-trained transformer checkpoints (pre-trained on large amounts of raw data) and fine-tune them on the AS2 datasets.
| Model   | Hyper-parameter | ASNQ  | WikiQA | NewsAS2 | IQAD  |
|---------|-----------------|-------|--------|---------|-------|
| RoBERTa | Batch size      | 2048  | 32     | 256     | 256   |
|         | Peak LR         | 1e-05 | 5e-06  | 5e-06   | 1e-05 |
|         | Warmup steps    | 10K   | 1K     | 5K      | 5K    |
|         | Epochs          | 6     | 30     | 8       | 10    |
| ELECTRA | Batch size      | 1024  | 128    | 128     | 256   |
|         | Peak LR         | 1e-05 | 2e-05  | 1e-05   | 2e-05 |
|         | Warmup steps    | 10K   | 1K     | 5K      | 5K    |
|         | Epochs          | 6     | 30     | 8       | 10    |
Table 3: Hyper-parameters used to fine-tune RoBERTa
and ELECTRA on the AS2 datasets. The best hyperparameters have been chosen based on the MAP results
on the validation set.
Using our proposed pre-training objectives, we provide stronger model checkpoints which improve over the standard public checkpoints, and which can be used as the initialization for downstream fine-tuning for contextual AS2.
To fine-tune our models on the downstream AS2 datasets, we found it beneficial to use a very large batch size for ASNQ and a smaller one for IQAD, NewsAS2 and WikiQA. Moreover, for every experiment we used a triangular learning rate scheduler, and we applied early stopping if the MAP on the development set did not improve 5 times in a row. We fixed the maximum sequence length to 256 tokens in every run, and we repeated each experiment 5 times with different initial random seeds. We did not use weight decay, but we clipped gradients larger than 1.0 in absolute value. More specifically, for the learning rate we tried all values in $\{5 \times 10^{-6}, 10^{-5}, 2 \times 10^{-5}\}$ for RoBERTa and in $\{10^{-5}, 2 \times 10^{-5}, 5 \times 10^{-5}\}$ for ELECTRA. Regarding the batch size, we tried all values in {512, 1024, 2048, 4096} for ASNQ, in {64, 128, 256, 512} for IQAD and NewsAS2, and in {16, 32, 64, 128} for WikiQA. More details about the final setting are given in Table 3.
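For illustration, a hedged sketch of this fine-tuning configuration with PyTorch Lightning and a triangular (linear warmup/decay) schedule is given below; the LightningModule `AS2Model` and the monitored metric name `dev_map` are assumptions, not part of our released code.

```python
# Hedged sketch of the fine-tuning setup described above.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping
from transformers import get_linear_schedule_with_warmup

# Inside the LightningModule, the triangular schedule could be built as:
# scheduler = get_linear_schedule_with_warmup(
#     optimizer, num_warmup_steps=1000, num_training_steps=total_steps)

early_stopping = EarlyStopping(monitor="dev_map", mode="max", patience=5)
trainer = pl.Trainer(
    precision=16,            # fp16 tensor-core acceleration
    gradient_clip_val=1.0,   # clip gradients larger than 1.0 in absolute value
    max_epochs=30,
    callbacks=[early_stopping],
)
# trainer.fit(AS2Model(lr=2e-5), datamodule=wikiqa_datamodule)  # hypothetical names
```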
For the pair-wise models, we format inputs as '[CLS]q[SEP]si[SEP]', while for contextual models we build inputs of the form
'[CLS]q[SEP]si[SEP]ci[SEP]'. We do not use extended sentence/segment ids for the non-contextual baselines and retain the original model design: (i)
disabled segment ids for RoBERTa and (ii) only using 2 different sentence/segment ids for ELECTRA.
For the fine-tuning of our continuously pre-trained models as well as the contextual baseline, we use three different sentence ids corresponding to q, s and c for both RoBERTa and ELECTRA. Finally, differently from pre-training, in fine-tuning we always provide the previous and the next sentence as context for a given candidate.
The contextual fine-tuning of every model on ASNQ required 6 hours per run on the machine configuration described in Appendix B. For other fine-tuning datasets, we used a single GPU for every experiment, and runs took less than 2 hours.
## E Qualitative Examples
In Table 4 we show a comparison of the ranking produced by our models and that by the contextual baselines on some questions selected from the ASNQ test set.
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![8_image_2.png](8_image_2.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank.

# Toward Expanding The Scope Of Radiology Report Summarization To Multiple Anatomies And Modalities
Zhihong Chen2,3∗, Maya Varma1∗**, Xiang Wan**2,3, Curtis P. Langlotz1, **Jean-Benoit Delbrouck**1∗
1Stanford University 2The Chinese University of Hong Kong, Shenzhen 3Shenzhen Research Institute of Big Data [email protected] [email protected]
{mvarma2,langlotz,jbdel}@stanford.edu
## Abstract
Radiology report summarization (RRS) is a growing area of research. Given the Findings section of a radiology report, the goal is to generate a summary (called an Impression section) that highlights the key observations and conclusions of the radiology study. However, RRS currently faces essential limitations. First, many prior studies conduct experiments on private datasets, preventing the reproduction of results and fair comparisons across different systems and solutions. Second, most prior approaches are evaluated solely on chest Xrays. To address these limitations, we propose a dataset (MIMIC-RRS) involving three new modalities and seven new anatomies based on the MIMIC-III and MIMIC-CXR datasets. We then conduct extensive experiments to evaluate the performance of models both within and across modality-anatomy pairs in MIMIC-RRS.
In addition, we evaluate their clinical efficacy via RadGraph, a factual correctness metric.
## 1 Introduction
A *radiology report* is a document that provides information about the results of a radiology study. It usually includes a Findings section with key observations from the study and an Impression section with the radiologist's overall conclusions. The latter is the most critical part of the report and is typically based on both the findings and the patient's condition. It can be helpful to automate the process of generating the impression section because it can be time-consuming and prone to errors when done manually (Bhargavan et al., 2009; Alexander et al., 2022). Recently, substantial progress has been made towards research on automated radiology report summarization (RRS) (Zhang et al.,
2020; Ben Abacha et al., 2021; Hu et al., 2022).
However, the field of RRS faces several key limitations. First, the experimental results of many
*Equal Contribution.
prior studies (Zhang et al., 2018, 2020) are reported on private datasets, making it difficult to replicate results or compare approaches. Second, existing studies are mainly limited to a single modality (*i.e.*,
X-ray) and a single anatomy (*i.e.*, chest) (Zhang et al., 2020; Ben Abacha et al., 2021; Hu et al.,
2021). In some cases, researchers omit to disclose the modality and anatomy of the radiology reports used for their experiments (Karn et al., 2022). Finally, recent models (Karn et al., 2022; Hu et al.,
2022) present an increased complexity in architecture that offers only marginal improvements on the existing evaluation metrics for summarization.
This further makes the replication of studies more difficult.
To address the aforementioned limitations, we construct a brand-new open-source dataset (named MIMIC-RRS) for radiology report summarization involving three modalities (X-ray, MRI, and CT)
and seven anatomies (chest, head, neck, sinus, spine, abdomen, and pelvis). MIMIC-RRS is based on the MIMIC-CXR (Johnson et al., 2019) and MIMIC-III (Johnson et al., 2016) datasets and introduces data from 12 new modality-anatomy pairs.
As a result, we introduce a new setting for evaluating the generalization capabilities of RRS models across different modalities and anatomies.
In addition, we benchmark various pre-trained language models on MIMIC-RRS. Through extensive experiments within and across modality-anatomy pairs, we show that adopting an appropriate pretrained model can achieve promising results comparable to previous studies. We also introduce a metric to evaluate factual correctness of generated summaries for any modality-anatomy pair.
## 2 Dataset Construction
In this section, we present the new MIMIC-RRS
dataset designed for radiology report summarization across multiple modalities and anatomies.
Comparisons with existing datasets are shown in
| Dataset | Anatomy | Modality | Language | Number |
|-------------------------------------|-----------|------------|------------|----------|
| Zhang et al. (2018) | Multiple | Multiple | English | 87,127 |
| Zhang et al. (2020) | Multiple | Multiple | English | 130,850 |
| RIH (Zhang et al., 2020) | Multiple | Multiple | English | 139,654 |
| OpenI (Demner-Fushman et al., 2016) | Chest | X-ray | English | 3,268 |
| MIMIC-CXR (Johnson et al., 2019) | Chest | X-ray | English | 128,003 |
| PadChest (Bustos et al., 2020) | Chest | X-ray | Spanish | 206,222 |
| MIMIC-RRS (ours) | Multiple | Multiple | English | 207,782 |
Table 1: Comparisons with existing datasets for radiology report summarization.
Table 1. We detail the collection process and the dataset statistics in the following subsections.
## 2.1 Data Collection
MIMIC-III One of our main contributions is to generate RRS data from MIMIC-III involving distinct combinations of modalities (*i.e.*, medical imaging techniques) and anatomies (*i.e.*, body parts). To this end, we first select five of the most frequently-occurring modality-anatomy pairs in the pool of MIMIC-III reports: "CT Head", "CT
Spine", "CT Chest", "CT Abdomen-Pelvis" and
"MR Head". Note that we discard chest X-rays as they are included in the MIMIC-CXR dataset. In addition, we pick six modality-anatomy pairs that occur infrequently in MIMIC-III to serve as out-ofdomain (OOD) test sets: "CT Neck", "CT Sinus",
"MR Pelvis", "MR Neck", "MR Abdomen", "MR
Spine". This set of pairs represents two types of OOD cases: (1) the modality has not been seen during training (one could train on CT neck and test on MR Neck), and (2) the anatomy has not been seen during training (for example, CT Sinus is the only "sinus" dataset).
For each report, we extract the findings and impression section. However, the findings section is not always clearly labeled as "findings". With the help of a board-certified radiologist, we identify alternate section headers that reference findings for each modality-anatomy pair. As an example, for CT
head, findings may be referenced in reports with the section headings "*non-contrast head ct*", "*ct head*",
"*ct head without contrast*", "*ct head without iv contrast*", "*head ct*", "*head ct without iv contrast*",
or "*cta head*". We identify 537 candidate section headers that reference findings across our dataset.
We also discarded reports where multiple studies are pooled in the same radiology report, leading to multiple intricate observations in the impression
| Modality-Anatomy | # Reports | Modality-Anatomy | # Reports | Modality-Anatomy | # Reports |
|------------------|-----------|------------------|-----------|------------------|-----------|
| CT Abd-Pelv | 15,989 | CT Chest | 12,786 | CT Head | 31,402 |
| CT Spine | 5,517 | MR Head | 7,313 | CT Neck | 1,140 |
| CT Sinus | 1,267 | MR Spine | 2,821 | MR Abdomen | 1,061 |
| MR Neck | 230 | MR Pelvis | 253 | X-ray Chest | 128,003 |

Table 2: Number of examples for each modality-anatomy pair in MIMIC-RRS.
section¹. Our resulting dataset consists of 79,779 selected reports across 11 modality-anatomy pairs.

¹We release our candidate section headers as well as code to recreate the dataset from scratch (Appendix B).
MIMIC-CXR MIMIC-CXR studies are chest X-ray examinations. We follow preprocessing steps reported in previous work (Delbrouck et al., 2022b),
and we only include reports with both a Findings and an Impression section. This yields 128,003 reports.
## 2.2 Data Statistics
In total, there are 207,782 samples in the MIMIC-RRS dataset. The number of examples for each modality and anatomy is provided in Table 2. To further analyze this dataset, we report in Figure 1 the text lengths and vocabulary sizes associated with reports from each modality-anatomy pair. We find that for all modality-anatomy pairs, the findings section is significantly longer than the impression section (up to +315% for MR abdomen).
Additionally, the findings sections of chest X-ray reports, which average only 49 words, are much shorter than reports from other modality-anatomy
![2_image_0.png](2_image_0.png)
pairs. In contrast, MR Abdomen and MR Pelvis reports include findings sections that average 205 and 174 words, respectively. We see that CT Chest, CT Head, and CT Abdomen-Pelvis reports have a relatively large vocabulary size (given their sample size) with 20,909, 19,813, and 18,933 words.
Surprisingly, the CT Abdomen-Pelvis impressions include a larger vocabulary than the findings. On the other hand, MR pelvis and MR abdomen impressions contain 36% and 37% fewer words than their corresponding findings, respectively.
We assign reports from the following modalityanatomy pairs to training, validation, and test sets due to their large sample sizes: "CT abdomen/pelvis", "CT Chest", "CT Neck", "CT
Spine", "CT Head", "MR Head", and "X-ray Chest". The remaining reports (*i.e.*, "MR Pelvis",
"MR Spine", "MR Neck", "MR Abdomen", and
"CT Sinus") are used for OOD test sets2.
## 3 Algorithmic Analysis
In this section, we conduct experiments to analyze the performance of different models on MIMICRRS. We provide three categories of analyses: inmodality-anatomy, cross-modality-anatomy, and clinical efficacy.
## 3.1 In-Modality-Anatomy
To benchmark the performance of different models on the proposed MIMIC-RRS dataset, we conduct experiments within each modality-anatomy pair (*i.e.*, the training and test procedures are performed using only one modality-anatomy pair).
We evaluate three types of pre-trained sequence-tosequence models, namely T5 (Raffel et al., 2020),
BART (Lewis et al., 2020), BioBART (Yuan et al.,
2022), and their variants.³ Results are reported in Table 3.

²We release data splits publicly so that future work can fairly compare new results.

³We do not evaluate several pre-trained models (e.g., ClinicalBERT (Alsentzer et al., 2019), BioClinicalBERT (Alsentzer et al., 2019), and Clinical-T5 (Lu et al., 2022)) that specialize in clinical text, since they were trained on text from MIMIC-III, which overlaps with our dataset: the MIMIC-RRS test set is included in their pre-training data. Thus, we do not adopt them in our experiments to avoid potential data leakage and ensure a fair comparison.
Several observations can be drawn from these experiments. First, simply adopting pre-trained sequence-to-sequence language models can achieve results comparable to previous state-of-the-art approaches designed for radiology summarization. Indeed, using BART-L as a backbone achieves the best performance, confirming the necessity of exploiting appropriate pre-trained language models.
Secondly, the performances across different model types vary (i.e., BART-L/BART-B, BioBART-L/
BioBART-B). Yet, we notice that the number of training parameters matters; large models report the best results. According to our evaluations, the BART models achieve better results across all modality-anatomy pairs. Surprisingly, it is worth noting that the BioBART models do not achieve better results than BART, although BioBART is pre-trained on a biomedical corpus. One explanation could be that BioBART models are pre-trained on abstracts from PubMed, which are not within the same domain as radiology reports.
In summary, we note several key findings for future studies: (i) "*Less is more*": starting from an appropriate backbone instead of designing complicated modules; (ii) the model size matters; (iii) the pretraining domain matters: knowledge from clinical notes or medical literature does not easily translate to radiology reports.
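For reference, the kind of plain sequence-to-sequence baseline evaluated here can be run with a few lines of HuggingFace code; the following is a hedged sketch of decoding with an (already fine-tuned) BART checkpoint, where the checkpoint name and the findings string are illustrative placeholders.

```python
# Hedged sketch: generating an impression from a findings section with BART.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

findings = "The lungs are clear. There is no pleural effusion or pneumothorax."
inputs = tokenizer(findings, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
impression = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(impression)
```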
## 3.2 Cross-Modality-Anatomy
In this section, we conduct experiments across modality-anatomy pairs (*i.e.*, models are trained on reports from a subset of modality-anatomy pairs and then evaluated on all pairs, including the OOD
test sets). We report the cross-modality-anatomy scores in Figure 2. A few interesting observations can be made. First, there are some associations between different anatomies and modalities. For example, the model trained on "CT Head" can also achieve promising results on the "MR Head" set.
Secondly, training the model with all the modality-anatomy pairs (denoted as ALL) achieves the best generalization, obtaining the best results across all modalities and anatomies, including the OOD test sets. We leave further exploration of cross-modality-anatomy associations and zero-shot OOD transfer for future work.
| Models | MR Head (R1/R2/RL) | CT Spine (R1/R2/RL) | CT Neck (R1/R2/RL) | CT Head (R1/R2/RL) | CT Chest (R1/R2/RL) | CT Abd/Pel (R1/R2/RL) | X-ray Chest (R1/R2/RL) |
|--------|--------------------|---------------------|--------------------|--------------------|---------------------|-----------------------|------------------------|
| WGSum | - | - | - | - | - | - | 48.4 / 33.3 / 46.7 |
| AIG-CL | - | - | - | - | - | - | 51.0 / 35.2 / 46.7 |
| T5-S | 38.2 / 18.3 / 28.5 | 35.8 / 18.6 / 28.9 | 39.0 / 20.0 / 29.1 | 43.1 / 25.3 / 36.5 | 39.5 / 18.5 / 29.3 | 28.9 / 10.6 / 21.2 | 47.8 / 32.2 / 43.5 |
| BioBART-B | 42.4 / 21.2 / 32.0 | 47.8 / 27.9 / 40.0 | 40.4 / 19.6 / 29.3 | 46.0 / 27.4 / 38.9 | 41.4 / 19.1 / 30.3 | 33.1 / 12.5 / 23.2 | 49.6 / 33.8 / 45.3 |
| BioBART-L | 42.1 / 21.4 / 32.6 | 47.8 / 28.1 / 40.8 | 40.3 / 19.4 / 29.6 | 45.5 / 26.7 / 38.6 | 40.2 / 17.8 / 28.9 | 32.5 / 11.7 / 22.6 | 49.3 / 33.3 / 44.9 |
| BART-B | 42.0 / 21.5 / 32.1 | 49.0 / 29.7 / 41.6 | 41.4 / 20.9 / 30.2 | 46.4 / 28.1 / 39.5 | 41.6 / 19.5 / 30.6 | 33.1 / 12.9 / 23.6 | 51.0 / 34.9 / 46.4 |
| BART-L | 43.7 / 22.1 / 32.8 | 49.8 / 29.7 / 41.4 | 42.0 / 20.5 / 30.4 | 46.6 / 27.3 / 39.0 | 41.8 / 18.6 / 29.6 | 33.9 / 12.4 / 23.2 | 51.7 / 34.9 / 46.8 |

Table 3: ROUGE-1 (R1), ROUGE-2 (R2) and ROUGE-L (RL) scores of the in-modality-anatomy experiments.
![3_image_0.png](3_image_0.png)
| | T5-S | BioBART-B | BioBART-L | BART-B | BART-L |
|---|------|-----------|-----------|--------|--------|
| MR Head | 21.5 | 24.8 | 25.3 | 25.0 | 26.1 |
| CT Spine | 23.8 | 37.0 | 37.0 | 38.5 | 38.3 |
| CT Neck | 21.2 | 23.6 | 23.6 | 24.0 | 24.9 |
| CT Head | 31.8 | 34.2 | 34.0 | 35.2 | 34.7 |
| CT Chest | 24.0 | 26.0 | 24.3 | 26.0 | 25.2 |
| CT Abd/Pel | 12.6 | 15.9 | 15.3 | 16.1 | 15.9 |
| X-ray Chest | 39.8 | 40.9 | 41.0 | 42.3 | 43.0 |

Table 4: F1-RadGraph scores of the in-modality-anatomy experiments.
## 3.3 Clinical Efficacy
In addition to evaluating our systems using the ROUGE-1, ROUGE-2, and ROUGE-L metrics (Lin, 2004), we use a factual correctness metric to analyze clinical efficacy. Most prior works (Zhang et al., 2020; Smit et al., 2020; Hu et al., 2022) mainly use the F1CheXbert metric, an F1-score that evaluates the factual correctness of the generated impressions using 14 chest radiographic observations. Unfortunately, this metric is unsuitable for MIMIC-RRS, which contains reports from other modality-anatomy pairs beyond chest X-rays.
For this reason, instead of using F1CheXbert, we propose to use RadGraph (Jain et al., 2021) to evaluate the clinical correctness of the generated impressions. RadGraph is a dataset containing board-certified radiologist annotations of radiology reports corresponding to 14,579 entities and 10,889 relations (Appendix A.1). We used the released pre-trained model to annotate our reports and asked one board-certified radiologist to subjectively validate that the entities predicted by the RadGraph model on our data are correct (examples are shown in Table 5). After confirming the effectiveness of the model, we follow Delbrouck et al. (2022a) to compute the F1-RadGraph scores. The score evaluates the correctness of the generated named entities in the hypothesis impression compared to the ground-truth impression. We report these results in Table 4. It can be observed that the BART models can achieve the best performance with respect to clinical efficacy. The results are consistent with the ROUGE scores, further confirming the effectiveness of adopting BART as the backbone instead of designing complicated solutions.
## 4 Related Work
In this section, we discuss prior research related to the radiology report summarization task. The first attempt at automatic summarization of radiology findings into natural language impression statements was proposed by Zhang et al. (2018).
Their contribution was to propose a first baseline on the task, using a bidirectional-LSTM as encoder and decoder. Importantly, they found that about 30% of the summaries generated from neural models contained factual errors. Subsequently, Zhang et al. (2020) proposed the F1CheXbert score to evaluate the factual correctness of the generated impression. They also used reinforcement learning to optimize the F1CheXbert score directly. Finally, both Hu et al. (2021) and Hu et al. (2022) used the Biomedical and Clinical English Model Packages in the Stanza Python NLP Library (Zhang et al., 2021) to extract medical entities. The former study used the entities to construct a Graph Neural Network, which was used as input in their summarization pipeline. In contrast, the latter study used the entities to mask the findings during contrastive pre-training.
We believe this paper is an original contribution to the aforementioned line of work. As instigated by Zhang et al. (2018), our goal is to release a new summarization corpus and baselines on new modalities and anatomies. We do so by releasing an RRS
dataset with data from 11 new modality-anatomy pairs. In addition, we extend the work performed by Zhang et al. (2020) by proposing a new metric to evaluates the factual correctness and completeness of the generated impression, namely the RadGraph score. Finally, we improve on the work of Hu et al.
(2021, 2022) in two ways: (1) we use semantic annotations from a pre-trained model trained using annotations from board-certified radiologists, as opposed to Stanza which leverages unsupervised biomedical and clinical text data; (2) we leverage relation annotations between entities, a feature that was not available in prior work.
## 5 Conclusion And Discussion
In this paper, we highlight and address several weaknesses associated with the radiology report summarization task. First, from a data perspective, we propose a *publicly available* dataset named MIMIC-RRS involving data samples from *twelve* modality-anatomy pairs, with 79,779 samples from MIMIC-III and 128,003 samples from MIMIC-CXR.
Second, we conducted more than 40 experiments and over 400 cross-modality-anatomy evaluations to benchmark the performance of different models.
We show that instead of designing complicated modules, we can start from an appropriate backbone model such as BART.
Finally, we proposed an elegant and simple metric, F1-RadGraph, to evaluate the factual correctness of summaries generated for any modality and anatomy. In the future, we hope that our work broadens the scope of the radiology report summarization task and contributes to the development of reliable RRS models that generalize well to new anatomies and modalities.
## Limitations
We note two limitations of our paper. First, our work does not extensively evaluate all the available pre-trained models that *could* be suitable for this task, e.g., ELECTRA (Clark et al.,
2020), BioLinkBERT (Yasunaga et al., 2022),
GatorTron (Yang et al., 2022), RadBERT (Yan et al., 2022), and PubMedBERT (Gu et al., 2021).
The aim of this work is not to report the strongest possible score but rather to address weaknesses of existing radiology report summarization studies (in terms of *data* and *evaluation*). Yet, we are confident our proposed solutions report a strong baseline for future work. Second, although F1-RadGraph seems like an appropriate metric to evaluate our new modalities and anatomies (and appears to be consistent with ROUGE scores), it has only been evaluated subjectively and not systematically.
## Acknowledgments
Maya Varma is supported by graduate fellowship awards from the Department of Defense (NDSEG)
and the Knight-Hennessy Scholars program at Stanford University.
## References
Robert Alexander, Stephen Waite, Michael A Bruno, Elizabeth A Krupinski, Leonard Berlin, Stephen Macknik, and Susana Martinez-Conde. 2022. Mandating limits on workload, duty, and speed in radiology. *Radiology*, 304(2):274–282.
Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. In *Proceedings of the 2nd Clinical Natural* Language Processing Workshop, pages 72–78.
Asma Ben Abacha, Yassine Mrabet, Yuhao Zhang, Chaitanya Shivade, Curtis Langlotz, and Dina DemnerFushman. 2021. Overview of the MEDIQA 2021 shared task on summarization in the medical domain. In *Proceedings of the 20th Workshop on Biomedical Language* Processing, pages 74–85, Online. Association for Computational Linguistics.
Mythreyi Bhargavan, Adam H Kaye, Howard P Forman, and Jonathan H Sunshine. 2009. Workload of radiologists in united states in 2006–2007 and trends since 1991–1992. *Radiology*, 252(2):458–467.
Aurelia Bustos, Antonio Pertusa, Jose-Maria Salinas, and Maria de la Iglesia-Vayá. 2020. Padchest: A large chest x-ray image dataset with multi-label annotated reports. *Medical image analysis*, 66:101797. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In *ICLR*.
Jean-Benoit Delbrouck, Pierre Chambon, Christian Bluethgen, Emily Tsai, Omar Almusa, and Curtis P Langlotz. 2022a. Improving the factual correctness of radiology report generation with semantic rewards.
arXiv preprint arXiv:2210.12186.
Jean-benoit Delbrouck, Khaled Saab, Maya Varma, Sabri Eyuboglu, Pierre Chambon, Jared Dunnmon, Juan Zambrano, Akshay Chaudhari, and Curtis Langlotz.
2022b. ViLMedic: a framework for research at the intersection of vision and language in medical AI. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 23–34, Dublin, Ireland. Association for Computational Linguistics.
Dina Demner-Fushman, Marc D Kohli, Marc B Rosenman, Sonya E Shooshan, Laritza Rodriguez, Sameer Antani, George R Thoma, and Clement J McDonald.
2016. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association, 23(2):304–310.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. *ACM Transactions on Computing* for Healthcare (HEALTH), 3(1):1–23.
Jinpeng Hu, Jianling Li, Zhihong Chen, Yaling Shen, Yan Song, Xiang Wan, and Tsung-Hui Chang. 2021.
Word graph guided summarization for radiology findings. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4980–4990, Online. Association for Computational Linguistics.
Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, and Tsung-Hui Chang. 2022. Graph enhanced contrastive learning for radiology findings summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4677–4688.
Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew Lungren, Andrew Ng, Curtis Langlotz, Pranav Rajpurkar, and Pranav Rajpurkar.
2021. Radgraph: Extracting clinical entities and relations from radiology reports. In *Proceedings of the Neural Information Processing Systems Track on Datasets* and Benchmarks, volume 1.
Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chihying Deng, Roger G Mark, and Steven Horng. 2019.
Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports. *Scientific* data, 6(1):1–8. Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H
Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. *Scientific data*, 3(1):1–9.
Sanjeev Kumar Karn, Ning Liu, Hinrich Schütze, and Oladimeji Farri. 2022. Differentiable multi-agent actorcritic for multi-step radiology report summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1542–1553.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Qiuhao Lu, Dejing Dou, and Thien Nguyen. 2022. Clinicalt5: A generative language model for clinical text. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5436–5443.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.
J. Mach. Learn. Res., 21(140):1–67.
Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Y Ng, and Matthew Lungren. 2020. Combining automatic labelers and expert annotations for accurate radiology report labeling using bert. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1500–
1519.
An Yan, Julian McAuley, Xing Lu, Jiang Du, Eric Y
Chang, Amilcare Gentili, and Chun-Nan Hsu. 2022.
Radbert: Adapting transformer-based language models to radiology. *Radiology: Artificial Intelligence*,
4(4):e210258.
Xi Yang, Nima PourNejatian, Hoo Chang Shin, Kaleb E
Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al.
2022. Gatortron: A large clinical language model to unlock patient information from unstructured electronic health records. *arXiv preprint arXiv:2203.03540*.
Michihiro Yasunaga, Jure Leskovec, and Percy Liang.
2022. Linkbert: Pretraining language models with document links. In *Association for Computational Linguistics (ACL)*.
Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, and Sheng Yu. 2022. Biobart: Pretraining and evaluation of a biomedical generative language model. In *Proceedings of the 21st Workshop on Biomedical Language Processing*, pages 97–109. Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christopher D Manning, and Curtis P Langlotz. 2018. Learning to summarize radiology findings. In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis, pages 204–213.
Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D
Manning, and Curtis Langlotz. 2020. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5108–5120.
Yuhao Zhang, Yuhui Zhang, Peng Qi, Christopher D
Manning, and Curtis P Langlotz. 2021. Biomedical and clinical english model packages for the stanza python nlp library. *Journal of the American Medical Informatics Association*, 28(9):1892–1899.
## A Details Of RadGraph Scores

## A.1 The Introduction Of RadGraph

![7_image_0.png](7_image_0.png)
To design our new evaluation metric, we leverage the RadGraph dataset (Jain et al., 2021) containing board-certified radiologist annotations of chest X-ray reports, which correspond to 14,579 entities and 10,889 relations. RadGraph has released a PubMedBERT model (Gu et al., 2021) pre-trained on these annotations to annotate new reports. An example of annotation can be seen in Figure 3.
Before moving on to the next section, we quickly describe the concept of entities and relations:
Entities An entity is defined as a continuous span of text that can include one or more adjacent words.
Entities in RadGraph center around two concepts:
Anatomy and *Observation*. Three uncertainty levels exist for *Observation*, leading to four different entities: Anatomy (ANAT-DP), Observation: Definitely Present (OBS-DP), Observation: Uncertain (*OBS-U*), and Observation: Definitely Absent (*OBS-DA*).
Relations A relation is defined as a directed edge between two entities. Three levels exist: *Suggestive* Of (., .), *Located At (., .)*, and *Modify (., .)*.
## A.2 Metric Computation
Using the RadGraph annotation scheme and pretrained model, we designed an F-score style reward that measures the factual consistency and completeness of the generated impression (also called hypothesis impression) compared to the reference impression.
To do so, we treat the RadGraph annotations of an impression as a graph $G(V, E)$ with the set of nodes $V = \{v_1, v_2, \ldots, v_{|V|}\}$ containing the entities and the set of edges $E = \{e_1, e_2, \ldots, e_{|E|}\}$ containing the relations between pairs of entities. The graph is directed, meaning that the edge $e = (v_1, v_2) \neq (v_2, v_1)$. An example is depicted in Figure 4. Each node or edge of the graph also has a label, which we denote as $v_i^L$ for an entity $i$ (for example "OBS-DP" or "ANAT") and $e_{ij}^L$ for a relation $e = (v_i, v_j)$ (such as "modified" or "located at").
To design our RadGraph score, we focus on the nodes $V$ and whether or not a node has a relation in $E$. For a hypothesis impression $y$, we create a new set of triplets $T_y = \{(v_i, v_i^L, R)\}_{i=1:|V|}$. The value $R$ is 1 if there exists $j \neq i$ such that $(v_i, v_j) \in E$, and 0 otherwise. In other words, a triplet contains an entity, the entity label, and whether or not this entity has a relation. We proceed to construct the same set for the reference report $\hat{y}$ and denote this set $T_{\hat{y}}$.
Finally, our score is defined as the harmonic mean of precision and recall between the hypothesis set $T_y$ and the reference set $T_{\hat{y}}$, giving a value between 0 and 100. As an illustration, the sets $V$, $E$ and $T$ of the graph $G$ in Figure 4 are shown as follows:
V = {mild, fluid, overload, overt, pulmonary, edema}
E = {(mild,overload), (overload, fluid), (edema, pulmonary)}
T = {(mild, obs-dp, 1), (fluid, obs-dp, 0), (overload, obs-dp, 1), (overt, obs-da, 0), (pulmonary, anat-dp, 0), (edema, obs-da, 1)}
![7_image_1.png](7_image_1.png)
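The computation can be summarized with the short sketch below (our illustration, not the released implementation), assuming the RadGraph output has already been converted into the (entity, label, has-relation) triplets defined above.

```python
# Hedged sketch of the F1-RadGraph harmonic-mean computation over triplet sets.
def f1_radgraph(hyp_triplets, ref_triplets):
    hyp, ref = set(hyp_triplets), set(ref_triplets)
    if not hyp or not ref:
        return 0.0
    tp = len(hyp & ref)                  # triplets shared by hypothesis and reference
    precision = tp / len(hyp)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 100 * 2 * precision * recall / (precision + recall)

# Toy usage with a subset of the Figure 4 triplets as the hypothesis:
hyp = {("mild", "obs-dp", 1), ("fluid", "obs-dp", 0), ("edema", "obs-da", 1)}
ref = {("mild", "obs-dp", 1), ("fluid", "obs-dp", 0), ("overload", "obs-dp", 1),
       ("overt", "obs-da", 0), ("pulmonary", "anat-dp", 0), ("edema", "obs-da", 1)}
print(f1_radgraph(hyp, ref))  # precision 1.0, recall 0.5 -> about 66.7
```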
## B Code And Data Release
Our research has been carried out using the ViLMedic library (Delbrouck et al., 2022b). Our code is available at https://github.com/jbdel/vilmedic. This link is anonymized and complies with the double-blind review process. More specifically, we release the code of the RadGraph score as well as the training of our baseline. We also release the script to download, pre-process, and split the radiology reports of the MIMIC-III database
Table 5: Examples of RadGraph entity annotations on CT Spine, CT Sinus, MR Neck, and MR Head reports.
as per our experiments. To download the MIMIC-
III database, researchers are required to formally request access via a process documented on the MIMIC website. There are two key steps that must be completed before access is granted: (i) the researcher must complete a recognized course in protecting human research participants, including Health Insurance Portability and Accountability Act (HIPAA) requirements. (ii) the researcher must sign a data use agreement, which outlines appropriate data usage and security standards, and forbids efforts to identify individual patients.
## C More Results
We present the results (including four metrics, i.e., ROUGE-1, ROUGE-2, ROUGE-L, and RadGraph scores) of all the experiments in Figures 5-9 for further research in this field. We also show the output of RadGraph (for entities) on a few samples of our new dataset in Table 5.
## D Ethics Statement
The MIMIC-CXR and MIMIC-III datasets are deidentified to satisfy the US Health Insurance Portability and Accountability Act of 1996 (HIPAA)
Safe Harbor requirements. Protected health information (PHI) has been removed.
Therefore, the ethical approval statement and the need for informed consent were waived for the studies on this database, which was approved by the Massachusetts Institute of Technology (Cambridge, MA) and Beth Israel Deaconess Medical Center
(Boston, MA). This research was conducted in accordance with the Declaration of Helsinki, describing the ethical principles of medical research involving human subjects.
Figures 5-9: Results of all experiments across the four metrics (ROUGE-1, ROUGE-2, ROUGE-L, and RadGraph scores).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
On Page 5.
✓ A2. Did you discuss any potential risks of your work?
On Page 5.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
On Pages 1 and 4.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.
✓ B1. Did you cite the creators of artifacts you used?
Section 2.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
On Page 5.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 2.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 2.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2.
## C ✓ **Did You Run Computational Experiments?** Section 3.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. We use the common pre-trained models in our experiments.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Sections 2 and 3.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 2.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 2.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 2.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Section 2 and Page 5.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 2. |
blankemeier-etal-2023-efficient | Efficient Diagnosis Assignment Using Unstructured Clinical Notes | https://aclanthology.org/2023.acl-short.42 | Electronic phenotyping entails using electronic health records (EHRs) to identify patients with specific health outcomes and determine when those outcomes occurred. Unstructured clinical notes, which contain a vast amount of information, are a valuable resource for electronic phenotyping. However, traditional methods, such as rule-based labeling functions or neural networks, require significant manual effort to tune and may not generalize well to multiple indications. To address these challenges, we propose \textit{HyDE} (hybrid diagnosis extractor). HyDE is a simple framework for electronic phenotyping that integrates labeling functions and a disease-agnostic neural network to assign diagnoses to patients. By training HyDE{'}s model to correct predictions made by labeling functions, we are able to disambiguate hypertension true positives and false positives with a supervised area under the precision-recall curve (AUPRC) of 0.85. We extend this hypertension-trained model to zero-shot evaluation of four other diseases, generating AUPRC values ranging from 0.82 - 0.95 and outperforming a labeling function baseline by 44 points in F1 score and a Word2Vec baseline by 24 points in F1 score on average. Furthermore, we demonstrate a speedup of {\textgreater}4x by pruning the length of inputs into our language model to {\textasciitilde}2.3{\%} of the full clinical notes, with negligible impact to the AUPRC. HyDE has the potential to improve the efficiency and efficacy of interpreting large-scale unstructured clinical notes for accurate EHR phenotyping. | Efficient Diagnosis Assignment Using Unstructured Clinical Notes Louis Blankemeier Stanford University [email protected] Jason Fries Stanford University [email protected] Robert Tinn Microsoft Health AI
[email protected] Sam Preston Microsoft Health AI
[email protected] Nigam Shah Stanford University [email protected] Akshay Chaudhari Stanford University [email protected]
## Abstract
Electronic phenotyping entails using electronic health records (EHRs) to identify patients with specific health outcomes and determine when those outcomes occurred. Unstructured clinical notes, which contain a vast amount of information, are a valuable resource for electronic phenotyping. However, traditional methods, such as rule-based labeling functions or neural networks, require significant manual effort to tune and may not generalize well to multiple indications. To address these challenges, we propose HyDE (hybrid diagnosis extractor). HyDE is a simple framework for electronic phenotyping that integrates labeling functions and a diseaseagnostic neural network to assign diagnoses to patients. By training HyDE's model to correct predictions made by labeling functions, we are able to disambiguate hypertension true positives and false positives with a supervised area under the precision-recall curve (AUPRC)
of 0.85. We extend this hypertension-trained model to zero-shot evaluation of four other diseases, generating AUPRC values ranging from 0.82 - 0.95 and outperforming a labeling function baseline by 44 points in F1 score and a Word2Vec baseline by 24 points in F1 score on average. Furthermore, we demonstrate a speedup of > 4× by pruning the length of inputs into our language model to ∼ 2.3% of the full clinical notes, with negligible impact on the AUPRC. HyDE has the potential to improve the efficiency and efficacy of interpreting large-scale unstructured clinical notes for accurate EHR phenotyping.
## 1 Introduction
The widespread adoption of electronic health records (EHRs) by health systems has created vast clinical datastores. One of the essential steps in utilizing these data is identifying patients with specific clinical outcomes and the timing of these outcomes, through a process called electronic phenotyping (Banda et al., 2018). Electronic phenotyping is critical for using EHR data to support clinical care (Kaelber et al., 2012; LePendu et al., 2012), inform public health decision-making (Dubberke et al., 2012), and train predictive models (Chaves et al., 2021; Blankemeier et al., 2022; Steinberg et al., 2021, 2023; Lee et al., 2022).
Electronic phenotyping is a complex task that involves combining structured data (e.g. lab results and codes) with unstructured data (e.g. clinical notes). Rule-based heuristics can be applied to structured data. However, the unstructured nature of information rich (Kern et al., 2006; Wei et al.,
2012; Martin-Sanchez and Verspoor, 2014) clinical notes makes phenotyping based on these notes particularly challenging.
Several solutions exist for electronic phenotyping using unstructured clinical notes (Peng et al.,
2018; Fries et al., 2021; Zhang et al., 2021a,b), but lack convenience for generalizing to new conditions. For example, labeling functions that consist of rules authored by domain experts are interpretable and readily shared without compromising data privacy, but can be laborious to create. Neural networks (NNs) that are trained to identify specific diseases can eliminate the need for handcrafted labeling functions and often provide more accurate results. However, NNs require extensive manual labeling time and often generalize poorly to diseases not seen during training.
To address this, we introduce HyDE (hybrid diagnosis extractor). HyDE is a simple approach to electronic phenotyping that combines the strengths of labeling functions and neural networks and allows for generalization to new diseases with minimal overhead.
Our key contributions are as follows:
1. We demonstrate that our model effectively discriminates between true cases of hypertension and false positives generated by labeling functions, as demonstrated by a supervised area under the precision recall curve (AUPRC) of
0.85. This same model achieves AUPRCs of 0.90, 0.82, 0.84, and 0.95 in zero-shot evaluations for *diabetes, osteoporosis, chronic kidney disease*, and *ischemic heart disease*, respectively. HyDE outperforms a labeling function baseline by 44 points in F1 score and a Word2Vec baseline (Mikolov et al., 2013b,a)
by 24 points in F1 score on average across seen and unseen diseases.
2. HyDE requires minimal setup. The labeling functions used in HyDE can be simple, reducing the manual effort often required to design labeling functions with high precision and recall.
3. HyDE is computationally efficient, as only small portions of a subset of clinical notes need to be passed through the neural network for processing, thus minimizing the computational resources required to run HyDE on large datasets. We show that pruning the length of the inputs by 4× to just 2.3% of the full clinical notes impacts performance by an average of only 0.017 AUPRC while providing a speedup of > 4×.
## 2 Methods
Our proposed method, HyDE (hybrid diagnosis extractor), aims to accurately identify the earliest occurrence of specific diseases in clinical patient encounter notes. We accomplish this by using a combination of labeling functions and a fine-tuned biomedical language model. The labeling functions are designed to be simple and identify as many mentions of the disease as possible, including false positives. The neural network is then used to differentiate between the true positives and false positives by analyzing small segments of the clinical notes around the location identified by the labeling functions. This approach allows for identifying potential mentions of the disease, while also utilizing the neural network to improve precision. It is worth noting that the components of HyDE are modular, allowing for the substitution of other methods for identifying disease-specific mentions beyond the labeling functions used in this paper. For example, Trove (Fries et al., 2021), offers ontology-based labeling functions that eliminate the need for coding task-specific labeling rules.
Our method (Fig. 1) involves the following steps: The user first develops a simple *labeling* function for the disease of interest. In the case of diabetes, this could be the regular expression diabetes | diabetic. This labeling function is then applied to the clinical notes to identify mentions of the disease. Additionally, the user identifies *peripheral terms* that frequently appear before or after mentions of the disease, such as insulin-dependent or mellitus in the case of diabetes. The text matching the labeling function and peripheral terms is then replaced with
[MASK], and a context around the resulting mask is extracted, resulting in a *masked contextual mention (MCM)*. These MCMs are used to fine-tune a biomedical language model to determine whether the context suggests that the patient actually has the condition in question. We hypothesize that this approach allows the language model to generalize to various conditions without additional training.
Thus, for a zero-shot transfer to other diseases, only a simple disease-specific labeling function and peripheral terms are required. We adopt the term zero-shot in this context as each disease comes with distinct comorbidities, symptoms, and interventions.
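As a concrete sketch of this pipeline (not the authors' released code), the snippet below applies a diabetes regular-expression labeling function, masks the matched mention together with peripheral terms, and extracts a word window around the mask; the 32-word half-window (a 64-word context) and the function names are illustrative assumptions.

```python
import re

DIABETES_LF = re.compile(r"diabetes|diabetic", re.IGNORECASE)
PERIPHERAL = re.compile(r"insulin[- ]dependent|mellitus", re.IGNORECASE)

def extract_mcms(note: str, context_words: int = 32):
    """Return masked contextual mentions (MCMs) for one clinical note."""
    mcms = []
    for match in DIABETES_LF.finditer(note):
        # Replace the matched disease mention with [MASK].
        masked = note[:match.start()] + "[MASK]" + note[match.end():]
        # Mask peripheral terms as well (the paper masks those adjacent to the mention;
        # masking all occurrences is a simplification in this sketch).
        masked = PERIPHERAL.sub("[MASK]", masked)
        words = masked.split()
        idx = next(i for i, w in enumerate(words) if "[MASK]" in w)
        window = words[max(0, idx - context_words): idx + context_words + 1]
        mcms.append(" ".join(window))
    return mcms

print(extract_mcms("Mother has diabetes mellitus; patient denies polyuria."))
```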
## 2.1 Dataset
After obtaining approval from the institutional review board, we obtained ∼8.8 million clinical notes from 23,467 adult patients who had an encounter at our tertiary care center between 2012 and 2018.
## 2.2 Disease Phenotypes
We apply our electronic phenotyping method to five chronic diseases: hypertension (HTN), diabetes mellitus (DM), osteoporosis (OST), chronic kidney disease (CKD), and ischemic heart disease
(IHD). These diseases were selected due to their high prevalence (HTN, 2021; DM, 2022; CKD,
2021; IHD, 2022; Clynes et al., 2020), the costs they incur to the healthcare system, and the potential for positive intervention (Blankemeier et al.,
2022). For initial model training, we used hypertension as it is the most prevalent of these diseases
(affecting 116 million in the US) (HTN, 2021) and we hypothesize that it generates the most diverse MCMs. Table 6 shows the labeling functions that we used to extract these mentions for each disease.
## 2.3 Data Labeling
Mask Contextual Mention Categories: We manually identified 6 categories of MCMs - (0) true positive; (1) false positive (otherwise unspecified);
(2) referring to someone other than the patient; (3)
referring to the patient but negated; (4) providing information / instructions / conditional statements
(i.e. instructions for how to take a medication); (5)
uncertain (i.e. differential diagnosis). Thus, category 0 is the true positive category and categories 1
- 5 are false positive categories. We formulate this problem as a binary classification where categories 1 - 5 are merged into class 1.
Amplifying False Positive Examples: The prevalence of false positives from our labeling functions was relatively low (Table 3). We thus sought to increase the number of category 2 false positive examples in our training dataset beyond the baseline prevalence of the 250 random MCM samples that were initially labeled (RS in Table 1). We applied a family labeling function to randomly sampled MCMs. This labeling function is positive if an MCM contains any term listed in A.1 relating to familial mentions. We generated 200 such category 2 amplified examples for subsequent labeling. Based on the annotations, we found that only 1.5% of the examples selected by this labeling function were actually true positive examples.
To increase the number of category 3 false positive examples, we applied the Negex algorithm (Chapman et al., 2001) to a separate set of randomly sampled masked contextual mentions.
For further details see A.2. Based on manual annotation of 200 such examples, we found that 22%
of the examples selected by this labeling function were actually true positive examples.
Filtering Masked Contextual Mentions: Applying the disease-specific labeling functions generated 827k, 555k, 87k, 199k, and 80k notes for HTN, DM, OST, CKD, and IHD respectively from roughly 8.1 million clinical notes (Table 4). Since clinical notes often contain duplicate information from multiple patient visits, we deduplicate the MCMs by comparing the 20 characters on either side of the masked mentions associated with a particular patient. If these characters are the same across multiple MCMs, we keep the MCM that was authored first and discard the others. Deduplication allows us to reduce the number of masked contextual mentions by 3.3×, 3.6×, 4.2×, 3.7×,
and 3.3× for HTN, DM, OST, CKD, and IHD respectively (Table 4). This method can be applied at inference to increase the computational efficiency of HyDE. Additionally, the length and number of MCMs per clinical note represents an average of 9% of the full notes for a context length of 64 words, which can improve the efficiency of inference on large datasets.
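A minimal sketch of this deduplication step is given below; it keys each MCM on the 20 characters on either side of the mask and keeps the earliest-authored instance per patient. Field names such as `patient_id` and `authored_at` are illustrative assumptions.

```python
def deduplicate_mcms(mcms):
    """mcms: list of dicts with 'patient_id', 'authored_at', and 'text' (containing '[MASK]').
    Keeps, per patient and local context, only the earliest-authored MCM."""
    kept = {}
    for mcm in sorted(mcms, key=lambda m: m["authored_at"]):
        i = mcm["text"].index("[MASK]")
        j = i + len("[MASK]")
        # Local context: the 20 characters on either side of the masked mention.
        key = (mcm["patient_id"], mcm["text"][max(0, i - 20):i], mcm["text"][j:j + 20])
        kept.setdefault(key, mcm)  # the first (earliest-authored) occurrence wins
    return list(kept.values())
```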
Active Learning: To further improve the performance of HyDE, we implement a human-in-the-loop uncertainty-based active learning strategy.
This involves multiple iterations of training where after each iteration, 100 examples with corresponding probabilities closest to 0.5 are manually labeled and added to the training dataset for the next training iteration. Table 1 shows performance across the active learning iterations (A1-A4).
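The selection step of this loop can be sketched as follows; it simply ranks unlabeled MCMs by how close the model's predicted probability is to 0.5 and surfaces the top 100 for annotation (a sketch, not the authors' exact implementation).

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, k: int = 100) -> np.ndarray:
    """probs: model probability of the positive class for each unlabeled MCM.
    Returns indices of the k most uncertain examples (probability closest to 0.5),
    which are then manually labeled and added to the next training iteration."""
    return np.argsort(np.abs(probs - 0.5))[:k]
```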
## 2.4 Model Training
We select PubMedBERT (Gu et al., 2021) (100 million parameters) as the model that we fine-tune due to its simple architecture and widespread validation.
We use a train batch size of 8, an Adam optimizer with β1 = 0.9 and β2 = 0.999, and a learning rate of 3e-5. We train for 25 epochs and choose the model checkpoint with the best validation set performance. 1,150 HTN examples are used for training and 250 HTN examples are used for validation. For disease specific fine-tuning experiments, between 90 and 100 disease-specific examples are used for both validation and training. There was no overlap between the patients used for the hypertension training and validation sets and the patients used for test sets as well as disease-specific validation sets. Our test sets consisted of 442 - 500 labeled cases for each disease.
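A minimal sketch of this fine-tuning setup with the HuggingFace Trainer is shown below; the checkpoint identifier, the placeholder datasets, and the use of validation loss for checkpoint selection are illustrative assumptions rather than the authors' released code.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Placeholder MCM datasets; in practice these hold the labeled HTN MCMs (text, binary label).
train_ds = Dataset.from_dict({"text": ["patient with [MASK] on lisinopril"], "label": [0]})
val_ds = Dataset.from_dict({"text": ["mother has [MASK]"], "label": [1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=64, padding="max_length")

args = TrainingArguments(
    output_dir="hyde_pubmedbert",
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    adam_beta1=0.9, adam_beta2=0.999,
    num_train_epochs=25,
    evaluation_strategy="epoch", save_strategy="epoch",
    load_best_model_at_end=True,  # keep the checkpoint with the best validation loss (a simplification)
)
Trainer(model=model, args=args,
        train_dataset=train_ds.map(tokenize, batched=True),
        eval_dataset=val_ds.map(tokenize, batched=True)).train()
```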
## 2.5 Evaluation
While labeling functions can be evaluated at a note level, we evaluate at an MCM level since a single clinical note can consist of multiple MCMs.
Furthermore, disease assignment based on clinical notes can be combined with assignment based on structured EHR, increasing the number of patients that are identified. Thus, we want to ensure high precision in identifying patients using clinical notes. For each MCM, we measure the fine-tuned language model's ability to correctly classify it as either true positive or false positive using area under the precision recall curve (AUPRC) and F1.
For our labeling function baseline (LF in Table 2), we use both the family labeling function described previously and Negex (Chapman et al.,
2001). Although additional terms could be added to this labeling function, those same terms could also be added to HyDE, making this a fair comparison.
We also include a Word2Vec baseline in our comparison (Mikolov et al., 2013b,a). This technique leverages a pre-trained model which has been trained on a corpus of around 100 billion words from Google News. For each MCM, we aggregate word embeddings by calculating their mean and then train an XGBoost model (Chen and Guestrin, 2016) over the computed averages of the HTN training dataset MCM embeddings. To optimize the performance of our XGBoost model, we fine-tune its hyperparameters by conducting a grid search using our HTN validation dataset. It's worth mentioning that this strategy does not retain the sequential order of words.
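The following is a sketch of this baseline under stated assumptions: the vector file path, the placeholder text/label lists, and the particular XGBoost hyperparameters (which the paper selects by grid search on the HTN validation set) are illustrative.

```python
import numpy as np
from gensim.models import KeyedVectors
from xgboost import XGBClassifier

# Pre-trained Google News vectors (~100-billion-word corpus); the file path is an assumption.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def embed(mcm: str) -> np.ndarray:
    """Mean-pool Word2Vec embeddings of the in-vocabulary words of an MCM (word order is discarded)."""
    vecs = [w2v[w] for w in mcm.split() if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# Placeholder labeled MCMs; in practice these are the HTN training MCMs.
train_texts, train_labels = ["patient has [MASK] controlled on lisinopril", "mother has [MASK]"], [0, 1]
test_texts = ["screening for [MASK] today"]

clf = XGBClassifier(n_estimators=200, max_depth=6)  # assumed values; tuned by grid search in the paper
clf.fit(np.stack([embed(t) for t in train_texts]), train_labels)
probs = clf.predict_proba(np.stack([embed(t) for t in test_texts]))[:, 1]
```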
To demonstrate the generalizability of our method on external data, we apply it to the assertion classification task from the 2010 i2b2/VA Workshop on Natural Language Processing (Uzuner et al., 2011). This dataset consists of 871 progress reports annotated with medical problems that are further classified as present, absent, possible, conditional, hypothetical, or not associated with the patient. We mapped the present category to class 0 and collated all other categories under class 1.
Table 1: AUPRC of the W2V baseline and fine-tuned PubMedBERT models (SL: supervised on HTN; zero-shot on DM, OST, CKD, and IHD).

| Method | HTN (SL) | DM | OST | CKD | IHD |
|----------|----------|------|------|------|------|
| W2V* | 0.52 | 0.70 | 0.53 | 0.59 | 0.83 |
| RS | 0.60 | 0.73 | 0.59 | 0.71 | 0.82 |
| RS+C | 0.77 | 0.85 | 0.65 | 0.75 | 0.92 |
| RS+C+A1 | 0.75 | 0.86 | 0.72 | 0.81 | 0.95 |
| RS+C+A2 | 0.82 | 0.88 | 0.76 | 0.84 | 0.96 |
| RS+C+A3 | 0.83 | 0.89 | 0.77 | 0.86 | 0.96 |
| RS+C+A4 | 0.85 | 0.90 | 0.82 | 0.84 | 0.95 |
We used regular expressions to extract mentions of HTN, DM, OST, CKD, and IHD. We filtered out diseases with fewer than 30 mentions. Consequently, our external validation was conducted on HTN, DM, and CKD.
## 3 Results

Supervised and Zero-Shot Model Performance: Table 1 depicts AUPRC performance of our Word2Vec (W2V) baseline compared to fine-tuned PubMedBERT models trained with various training dataset compositions (all rows except the first). We demonstrate supervised performance on HTN, as well as zero-shot generalization to DM, OST, CKD,
and IHD. The performance of HyDE surpasses that of our labeling function baseline by 44 points in F1 score and our Word2Vec baseline by 24 points in F1 score on average (Table 2). We find that fine-tuning the best PubMedBERT model (RS+C+A4 training dataset) on ∼100 additional disease-specific examples does not significantly improve performance, with scores of 0.91, 0.84, 0.81, and 0.95 on DM,
OST, CKD, and IHD, respectively. This supports the conclusion that our model generalizes well to other diseases, without requiring disease-specific fine-tuning. On the external i2b2/VA dataset we achieve the following AUPRC scores without any additional finetuning - 0.79 for HTN (336 patients),
0.99 for DM (213 patients), and 0.95 for CKD (45 patients).

Table 2: F1 score comparison of the labeling function baseline (LF), the Word2Vec (W2V) baseline, and the RS+C+A4 fine-tuned PubMedBERT model. * indicates that the W2V baseline was trained using the full RS+C+A4 dataset.
| Method | SL | Zero-Shot | | | |
|----------|------|-------------|------|------|------|
| HTN | DM | OST | CKD | IHD | |
| LF | 0.39 | 0.41 | 0.18 | 0.28 | 0.48 |
| W2V* | 0.41 | 0.61 | 0.48 | 0.54 | 0.68 |
| RS+C+A4 | 0.74 | 0.81 | 0.75 | 0.74 | 0.89 |
Context Length Ablation: Fig. 2 shows that RS+C+A4 (RS: 250 random MCM samples; C:
400 category 2 and 3 amplified MCMs; A4: 400 samples from active learning) trained models saturate with increasing context lengths. Table 5 shows that reducing the context length from 64 words to 16 words speeds up the model by 4.5x while only lowering average AUPRC by 0.017. From Table 4 we observe that this represents an average of 2.3%
of the full clinical notes among notes that contain at least one MCM.
## 4 Conclusion
With its minimal setup, computational efficiency, and generalization capability, HyDE offers a promising tool for electronic phenotyping from unstructured clinical notes. By improving the ability to extract patient health status, we hope that HyDE
will enable more informative large-scale studies using EHR data, ultimately leading to public health insights and improved patient care.
## 5 Limitations
HyDE has yet to be tested in a large-scale and multi-site setting, which may offer more generalization challenges. Furthermore, an evaluation of note-level classification performance was not conducted.
Although we expect that HyDE would perform well under such an evaluation, this would require heuristics to aggregate multiple MCMs per note.
## 6 Ethics Statement
The authors have carefully considered the implications of their work, including potential positive and negative impacts. A potential risk associated with this approach would be the leakage of protected health information (PHI) following a release of the model. To mitigate this risk, we will conduct a thorough review of the training data and consult with experts before deciding to release the model.
Additionally, the authors have reviewed the ACM
Code of Ethics and Professional Conduct document and attest that this work adheres to the principles outlined in that document.
## References
2021. Chronic kidney disease in the united states, 2021.
2021. Facts about hypertension.
2022. Heart disease facts.
2022. National diabetes statistics report.
Juan M Banda, Martin Seneviratne, Tina HernandezBoussard, and Nigam H Shah. 2018. Advances in electronic phenotyping: from rule-based definitions to machine learning models. *Annual review of* biomedical data science, 1:53.
Louis Blankemeier, Isabel Gallegos, Juan Manuel Zambrano Chaves, David Maron, Alexander Sandhu, Fatima Rodriguez, Daniel Rubin, Bhavik Patel, Marc Willis, Robert Boutin, et al. 2022. Opportunistic incidence prediction of multiple chronic diseases from abdominal ct imaging using multi-task learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 309–318. Springer.
Wendy W Chapman, Will Bridewell, Paul Hanbury, Gregory F Cooper, and Bruce G Buchanan. 2001. A
simple algorithm for identifying negated findings and diseases in discharge summaries. *Journal of biomedical informatics*, 34(5):301–310.
Juan M Zambrano Chaves, Akshay S Chaudhari, Andrew L Wentland, Arjun D Desai, Imon Banerjee, Robert D Boutin, David J Maron, Fatima Rodriguez, Alexander T Sandhu, R Brooke Jeffrey, et al. 2021.
Opportunistic assessment of ischemic heart disease risk using abdominopelvic computed tomography and medical record data: a multimodal explainable artificial intelligence approach. *medRxiv*.
Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A
scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785–
794.
Michael A Clynes, Nicholas C Harvey, Elizabeth M
Curtis, Nicholas R Fuggle, Elaine M Dennison, and Cyrus Cooper. 2020. The epidemiology of osteoporosis. *British Medical Bulletin*.
Erik R Dubberke, Humaa A Nyazee, Deborah S
Yokoe, Jeanmarie Mayer, Kurt B Stevenson, Julie E
Mangino, Yosef M Khan, Victoria J Fraser, et al.
2012. Implementing automated surveillance for tracking clostridium difficile infection at multiple healthcare facilities. *Infection Control & Hospital* Epidemiology, 33(3):305–308.
Jason A Fries, Ethan Steinberg, Saelig Khattar, Scott L
Fleming, Jose Posada, Alison Callahan, and Nigam H
Shah. 2021. Ontology-driven weak supervision for clinical entity classification in electronic health records. *Nature communications*, 12(1):1–11.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23.
David C Kaelber, Wendy Foster, Jason Gilder, Thomas E Love, and Anil K Jain. 2012. Patient characteristics associated with venous thromboembolic events: a cohort study using pooled electronic health record data. *Journal of the American Medical* Informatics Association, 19(6):965–972.
Elizabeth FO Kern, Miriam Maney, Donald R Miller, Chin-Lin Tseng, Anjali Tiwari, Mangala Rajan, David Aron, and Leonard Pogach. 2006. Failure of icd-9-cm codes to identify patients with comorbid chronic kidney disease in diabetes. Health services research, 41(2):564–580.
Matthew H Lee, Ryan Zea, John W Garrett, Peter M
Graffy, Ronald M Summers, and Perry J Pickhardt.
2022. Abdominal ct body composition thresholds using automated ai tools for predicting 10-year adverse outcomes. *Radiology*, page 220574.
Paea LePendu, Srinivasan V Iyer, Cédrick Fairon, and Nigam H Shah. 2012. Annotation analysis for testing drug safety signals using unstructured clinical notes.
In *Journal of biomedical semantics*, volume 3, pages 1–12. Springer.
Fernando Martin-Sanchez and Karin Verspoor. 2014.
Big data in medicine is driving big changes. *Yearbook of medical informatics*, 23(01):14–20.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality.
Advances in neural information processing systems, 26.
Yifan Peng, Xiaosong Wang, Le Lu, Mohammadhadi Bagheri, Ronald Summers, and Zhiyong Lu.
2018. Negbio: a high-performance tool for negation and uncertainty detection in radiology reports.
AMIA Summits on Translational Science Proceedings, 2018:188.
Ethan Steinberg, Ken Jung, Jason A Fries, Conor K
Corbin, Stephen R Pfohl, and Nigam H Shah. 2021.
Language models are an effective representation learning technique for electronic health record data.
Journal of Biomedical Informatics, 113:103637.
Ethan Steinberg, Yizhe Xu, Jason Fries, and Nigam Shah. 2023. Self-supervised time-to-event modeling with structured medical records. arXiv preprint arXiv:2301.03150.
Özlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text.
Journal of the American Medical Informatics Association, 18(5):552–556.
Wei-Qi Wei, Cynthia L Leibson, Jeanine E Ransom, Abel N Kho, Pedro J Caraballo, High Seng Chai, Barbara P Yawn, Jennifer A Pacheco, and Christopher G Chute. 2012. Impact of data fragmentation across healthcare centers on the accuracy of a highthroughput clinical phenotyping algorithm for specifying subjects with type 2 diabetes mellitus. Journal of the American Medical Informatics Association, 19(2):219–224.
Jingqing Zhang, Luis Bolanos Trujillo, Tong Li, Ashwani Tanwar, Guilherme Freire, Xian Yang, Julia Ive, Vibhor Gupta, and Yike Guo. 2021a. Self-supervised detection of contextual synonyms in a multi-class setting: Phenotype annotation use case. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8754–8769, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021b.
Knowledge-rich self-supervision for biomedical entity linking. *arXiv preprint arXiv:2112.07887*.
## A Appendix

## A.1 Family Labeling Function
The family labeling function is positive if any of the following terms match within a masked contextual mention: relative, relatives, family, father, mother, grandmother, grandfather, sister, brother, sibling, aunt, uncle, nephew, niece, son, daughter, cousin, parents.

Table 3: Prevalence (%) of each MCM category per disease (category 0 is the true positive category; categories 1-5 are false positive categories).

| Cat | Type | HTN | DM | OST | CKD | IHD |
|-----|------|------|------|------|------|------|
| 0 | + | 87.6 | 74.0 | 79.2 | 77.6 | 46.2 |
| 1 | - | 0.6 | 0.6 | 1.8 | 0.6 | 0.4 |
| 2 | - | 4.4 | 11.6 | 4.4 | 3.0 | 17.6 |
| 3 | - | 2.8 | 4.8 | 1.8 | 4.4 | 14.8 |
| 4 | - | 0.8 | 5.6 | 6.8 | 6.4 | 15.4 |
| 5 | - | 3.8 | 3.4 | 6.0 | 8.0 | 5.4 |
## A.2 Negex Algorithm
In order to increase the recall of the Negex (Chapman et al., 2001) algorithm for manual labeling in order to amplify false positives for HyDE training, we modified it slightly to allow negative terms to match within 7 words of the mention, rather than 5. However, for the labeling function baseline we used Negex with a conventional window of 5 words, as opposed to the 7 word window used during HyDE
training.
We modify the Negex keywords slightly based on manual examination of the MCMs.
The original keywords were extracted from the negspaCy en_clinical termset. This function is positive if any of the following terms appear within the specified number of words before the disease mention: declined, denied, denies, denying, no sign of, no signs of, not, not demonstrate, symptoms atypical, doubt, negative for, no, versus, without, doesn't, doesnt, don't, dont, didn't, didnt, wasn't, wasnt, weren't, werent, isn't, isnt',
aren't, arent, cannot, can't, cant, couldn't, couldnt', never, none, resolved, absence of or if any of the following terms appear within the specified number of words after the disease mention: declined, unlikely, was not, were not, wasn't, wasnt, weren't, werent, not, no, none.
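A sketch of this windowed pre-mention check is shown below; `PRE_NEGATIONS` holds only a subset of the terms listed above, and the mask is assumed to be whitespace-delimited.

```python
PRE_NEGATIONS = ["denies", "denied", "no sign of", "no signs of", "negative for",
                 "no", "not", "without"]  # subset of the full keyword list above

def negated_before(mcm: str, window: int = 7) -> bool:
    """True if any negation term occurs within `window` words before the [MASK]ed mention.
    A 7-word window was used when amplifying training examples and a 5-word window
    for the labeling function baseline."""
    words = mcm.lower().split()
    idx = words.index("[mask]")
    preceding = " ".join(words[max(0, idx - window):idx])
    return any(term in preceding for term in PRE_NEGATIONS)
```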
## A.3 **Qualitative Evaluation Of Active Learning** Examples
Qualitatively, the examples surfaced during active learning appear to be challenging cases. For example, some were examples that would have been counted as false positives by Negex but shouldn't be. One such example is "Insulin dependent diabetes mellitus ¬ø [MASK] No past medical history pertinent negatives". Here, ¬ø denotes a deidentified date. Another challenging example is
"4. Screening for [MASK]". Often when items are enumerated, they indicate a positive diagnosis. However, in this case, the patient was only screened for the condition.
Table 4: HyDE neural network computational efficiency. For reference, the average length of the 8.8 million clinical notes in the dataset is 375 words. We filter these 8.8 million notes down to 8.1 million notes by note type.
We include the most common note types in our dataset: "Progress Note", "Inpatient", "ED Note", "Consultation Note", "Letter", "Other Note", "Nursing Sign Out Note", "History and Physical", "Outpatient", "IP Consult", and
"Discharge/Transfer Summary". The number of MCMs generated after deduplication based on local context of 20 characters is shown below. These numbers vary depending on the exact form of the labeling functions used.
| Metric | HTN | DM | OST | CKD | IHD |
|------------------------------------------------|--------|--------|-------|-------|-------|
| Before deduplication Number of notes with MCMs | 827k | 555k | 87k | 199k | 80k |
| Number of MCMs | 1,616k | 1,264k | 127k | 449k | 125k |
| MCMs per note | 2.0 | 2.3 | 1.5 | 2.3 | 1.6 |
| Average size of notes with MCMs (words) | 1,256 | 1,247 | 1,508 | 1,374 | 1,473 |
| % notes represented by MCMs (64 word context) | 10% | 12% | 6% | 11% | 7% |
| Number of MCMs after deduplication | 495k | 353k | 30k | 120k | 38k |
| MCM reduction through deduplication | 3.3x | 3.6x | 4.2x | 3.7x | 3.3x |
Table 5: Inference time versus context length. All experiments are performed on a single 12GB Titan Xp GPU.
Analysis is done using 15,000 MCMs and the times reported are the total time spent for each task while processing the 15,000 MCMs. Batchsizes are increased in increments of 100 until they no longer fit on the GPU.
| Context length (words) | 16 | 32 | 64 |
|-----------------------------------|-------|-------|-------|
| Batch size (MCMs) | 3800 | 2000 | 1000 |
| Total inference time (s) | 17.47 | 43.33 | 79.07 |
| Data transfer CPU to GPU time (s) | 14.92 | 39.23 | 72.08 |
| Tokenization time (s) | 0.81 | 1.10 | 1.63 |
| Model run time (s) | 0.44 | 1.58 | 4.09 |
| MCMs / second | 859 | 346 | 190 |
Table 6: Labeling functions used to extract masked contextual mentions. HTN, DM, OST, CKD, and IHD stand for hypertension, diabetes, osteoporosis, chronic kidney disease, and ischemic heart disease respectively.
| HTN | (\s+hypertension)|(\s+HTN) |
|-------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| DM | (\s+diabetes)|(\s+DM2)|(\s+DM\s+)|(\s+T2DM) |
| OST | (\s+osteoporosis\s+)|(\s+osteoporotic\s+) |
| CKD | (\s+kidney failure)|(\s+nephropathy)|(\s+CKD\s+)|(\s+kidney disease)| (\s+chronic kidney disease)|(\s+renal disease)|(\s+ESRD\s+) |
| IHD | (\s+NSTEMI\s+)|(\s+myocardial ischemia)|(\s+ischemic heart disease)| (\s+cardiac ischemia)|(\s+myocardial infarction)|(\s+myocardial necrosis)| (\s+coronary heart disease)|(\s+coronary artery disease)|(\s+heart attack) |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5
✗ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
End of the abstract and end of the introduction (section 1)
✓ A4. Have you used AI writing assistants when working on this paper?
We used ChatGPT to propose suggestions for improving the grammar and phrasing of author-generated writing. We used this for parts of each section of the paper, but did not always use the suggestions generated by ChatGPT, and we always modified the suggestions.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 2 (Methods) And Section 3 (Results)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 2.5 (model training) and Table 5 in the appendix.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 2.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3 (results)
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 2.3 (data labeling)
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 2.3 (masked contextual mention categories)
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Annotation was done by the authors of the paper.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Annotation was done by the authors of the paper.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Section 2.1 (dataset)
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Annotation was done by the authors of the paper. |
monajatipoor-etal-2023-metavl | {M}eta{VL}: Transferring In-Context Learning Ability From Language Models to Vision-Language Models | https://aclanthology.org/2023.acl-short.43 | Large-scale language models have shown the ability to adapt to a new task via conditioning on a few demonstrations (i.e., in-context learning). However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models? In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the VL domain? Specifically, we first meta-trains a language model to perform in-context learning on NLP tasks (as in MetaICL); then we transfer this model to perform VL tasks by attaching a visual encoder. Our experiments suggest that indeed in-context learning ability can be transferred cross modalities: our model considerably improves the in-context learning capability on VL tasks and can even compensate for the size of the model significantly. On VQA, OK-VQA, and GQA, our method could outperform the baseline model while having {\textasciitilde}20 times fewer parameters. | # Metavl: Transferring In-Context Learning Ability From Language Models To Vision-Language Models
Masoud Monajatipoor UCLA
[email protected] Liunian Harold Li ∗
UCLA
[email protected] Mozhdeh Rouhsedaghat *
USC
[email protected] Lin F. Yang UCLA
[email protected]
## Abstract
Large-scale language models have shown the ability to adapt to a new task via conditioning on a few demonstrations (i.e., in-context learning). However, in the vision-language domain, most large-scale pre-trained vision-language (VL) models do not possess the ability to conduct in-context learning. How can we enable in-context learning for VL models?
In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the VL
domain? Specifically, we first meta-train a language model to perform in-context learning on NLP tasks (as in MetaICL); then we transfer this model to perform VL tasks by attaching a visual encoder. Our experiments suggest that indeed in-context learning ability can be transferred across modalities: our model considerably improves the in-context learning capability on VL tasks and can even compensate for the size of the model significantly. On VQA, OK-VQA,
and GQA, our method could outperform the baseline model while having ∼20 times fewer parameters.
## 1 Introduction
Pre-trained language models have shown impressive performance on a range of tasks by learning from large-scale text corpus (Radford et al.,
2018, 2019; Yang et al., 2019). Recent studies find that some of these language models can be used to perform *in-context learning* out-of-the-box, i.e., adapting to a task by conditioning on a few demonstrations in context without any gradient update (Brown et al., 2020; Min et al., 2022), which is highly desirable.
∗equal contribution

In VL modeling, in-context learning is less explored and only a handful of models are proposed to perform in-context learning mainly by limiting the amount of deviation of a pretrained large-scale language model from the language space and translating visual inputs to language embedding space. They either require a large capacity (Tsimpoukelli et al., 2021; Alayrac et al., 2022) or a giant corpus consisting of in-context learning examples
(Alayrac et al., 2022; Liu et al., 2023; Koh et al.,
2023).
In this work, we explore whether we could enable in-context learning in VL tasks without resorting to extreme scale-up. We study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the VL
domain? To elaborate, not every language model exhibits excellent *in-context* learning ability; recent studies (Min et al., 2022) show that one could explicitly train language models to perform in-context learning, by training the model on multiple tasks with in-context few-shot examples, a process that resembles meta-learning. Thus, an intriguing query arises: when a language model is first meta-trained to perform in-context learning, can it be transferred to perform in-context learning for VL tasks better?
In our approach, we use a meta-trained language model as the transformer encoder-decoder and map visual features into the language embedding space; this yields our proposed VL model, which we name MetaVL.
Our experimental results demonstrate that MetaVL surpasses the baseline model's performance, even though MetaVL is designed to be 20 times smaller in size.
This study makes three main contributions: 1)
To the best of our knowledge, this is the first attempt to transfer the meta-learning knowledge for in-context learning from single-modality to multimodality. 2) We propose a VL model, MetaVL (code available at https://github.com/masoud-monajati/MetaVL), which outperforms the baseline in in-context learning while having a much smaller model size. 3)
Through extensive experiments on VQA, GQA and OK-VQA, we demonstrate the in-context learning capability of MetaVL and analyze its components.
## 2 Related Work
In-context learning in VL. Frozen (Tsimpoukelli et al., 2021) is the first attempt for incontext learning in multimodality by leveraging a frozen GPT-like language model as the language backbone and mapping visual features to the language embedding space. Frozen sheds light on the feasibility of benefiting from the frozen LMs in VL modeling to learn a new task from a few examples in context. MAGMA (Eichenberg et al., 2021) is another encoder-decoder architecture for VL pretraining which showed that adding adaptor blocks between the frozen language model layers could further improve the performance for VL tasks in a few-shot scenario.
Other recent works (Yang et al., 2022; Alayrac et al., 2022; Zeng et al., 2022) follow the similar principle as the previous works to tackle in-context learning in VL modeling and achieve superior results by leveraging extremely large-scale models.
In this paper, we study a problem overlooked in prior work: we delve into the possibility of enabling in-context learning for VL tasks without relying on extensive scalability. Our focus lies in exploring the hypothesis: Is it feasible to transfer the in-context learning capability from the language domain to the VL domain?
Meta-learning in language modeling Largescale language models have shown the capability to be trained on a new task if properly prompted with in-context examples, i.e., in-context learning. In this learning strategy, the language model is asked to generate the desired output, e.g., an answer in the question-answering task, which is prompted by a few data examples along with their corresponding supervision sampled from the training split, and the language model learns the task in context without performing any gradient updates.
Although such training is highly data-efficient, its performance is far behind supervised fine-tuning.
Therefore, inspired by (Vilalta and Drissi, 2002; Evgeniou and Pontil, 2004; Finn et al., 2017; Ruder, 2017), MetaICL (Min et al., 2022) proposes training the model for in-context learning as a kind of meta-learning. MetaICL meta-trained a GPT language model on a diverse set of natural language tasks and datasets and showed that meta-training a language model in an in-context learning manner could significantly improve the in-context learning capability of the language model for a new task.
## 3 Approach
In this section, we first explain the existing metatraining procedure for language modeling and then introduce our proposed method for in-context learning in VL.
Meta-training in language modeling. MetaICL
has shown that a language model that is metatrained on a diverse set of tasks in an in-context learning setup is a strong few-shot learner. To metatrain an auto-regressive language model, in each iteration, a meta-learning task is randomly chosen from a collection of diverse meta-training language tasks, and k + 1 data-label examples are randomly sampled from its training split. Then, the model is supervised by the concatenation of
(x1, y1, x2, y2*, ..., x*k+1) which will be fed as a single input to the model for predicting the label (yk+1) as the training objective, i.e., the metatraining step aims to maximize:
$$P(y_{k+1}\mid x_{1},y_{1},\ldots,x_{k},y_{k},x_{k+1})\qquad(1)$$
During inference, the same in-context setup (k examples from the training split) is sampled from a target dataset to be used as (x1, y1), (x2, y2), . . . , (xk, yk), x and given to the model to predict the label y.
The meta-trained language model trained on a diverse set of natural language datasets has shown good performance for an unseen task when few data are given in context (Min et al., 2022).
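A simplified sketch of how one meta-training example can be packed so that the loss falls only on the final label (the objective above) is given below; the GPT-2 checkpoint is illustrative and details such as task sampling and length truncation used by MetaICL are omitted.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

def meta_training_loss(examples):
    """examples: k+1 (x, y) text pairs sampled from one meta-training task.
    The k demonstrations and the final input are packed into one sequence, and the
    loss is computed only on the final label y_{k+1}."""
    prompt = " ".join(f"{x} {y}" for x, y in examples[:-1]) + f" {examples[-1][0]}"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + examples[-1][1], return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore loss on everything except y_{k+1}
    return model(input_ids=input_ids, labels=labels).loss
```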
MetaVL - a VL method with meta-learning knowledge for in-context learning. MetaVL has three main submodels including a meta-trained encoder-decoder and is being trained using Prefix Language Modeling (PrefixLM) (Wang et al.,
2021). In the following, we discuss each submodel in detail.
Visual encoder and visual prefix. The visual encoder is defined as a function Ve(x) that takes an image of x and outputs visual features. We extract the feature grid before the pooling layer n × Dv where n is the number of feature maps and Dv is the feature size of the visual encoder. Then, the output features can be viewed as a sequence of n visual tokens representing the image.
The visual encoder is followed by the visual prefix module that is defined as Vp(x) ∈ Dv × Dl which maps the visual features to language embedding space. This module is seeking to properly project the visual tokens into language tokens.
During the VL training, the parameters of both of these modules are trainable and are learned with different learning rates by back-propagation guided by the frozen language model.
Language encoder-decoder The meta-trained language encoder-decoder is used as the LM backbone and is frozen during the VL training process so the meta-trained language model preserves its few-shot capabilities. The language encoder encodes the text into text tokens represented by t1, t2, . . . , tm. Then, given the multimodal tokens (image and text) as U = v1, v2, . . . , vn, t1, t2, . . . , tm, the decoder is trained to reconstruct the corresponding text with a standard language modeling objective to maximize the following likelihood:
$$L(U)=\sum_{i=1}^{m}\log P(t_{i}\mid v_{1},\ldots,v_{n},t_{1},\ldots,t_{i-1};\theta)\qquad(2)$$
After the VL training, for learning a new VL task in-context, given a few examples from a new task with a new format, we concatenate k sampled datalabel pairs from the training split along with one data from the val/test split to construct the prompt and feed it to the model for predicting the desired output. The entire process is visualized in Fig. 1.
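The following is a schematic PyTorch sketch of this forward pass: the visual grid features are linearly mapped into n visual tokens in the language embedding space, prepended to the text embeddings, and the loss of Eq. (2) is taken only over the text positions. Module names, dimensions, and the assumption of a HuggingFace-style decoder LM are illustrative; the actual implementation may differ.

```python
import torch
import torch.nn as nn

class VisualPrefix(nn.Module):
    """Maps n visual feature vectors of size D_v into the language embedding space of size D_l."""
    def __init__(self, d_visual: int = 3072, d_lang: int = 768):
        super().__init__()
        self.proj = nn.Linear(d_visual, d_lang)

    def forward(self, visual_feats):    # (batch, n, D_v) grid features from the visual encoder
        return self.proj(visual_feats)  # (batch, n, D_l) visual tokens v_1, ..., v_n

def prefix_lm_loss(lm, visual_prefix, visual_feats, text_ids):
    """lm: a frozen HuggingFace-style decoder LM; the loss is computed over text tokens only."""
    vis_emb = visual_prefix(visual_feats)                 # trainable mapping
    text_emb = lm.get_input_embeddings()(text_ids)        # frozen LM embeddings
    inputs = torch.cat([vis_emb, text_emb], dim=1)
    ignore = torch.full(vis_emb.shape[:2], -100, dtype=torch.long, device=text_ids.device)
    labels = torch.cat([ignore, text_ids], dim=1)         # no loss on visual-prefix positions
    return lm(inputs_embeds=inputs, labels=labels).loss
```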
## 4 Experiments 4.1 Datasets And Baseline
We use the dataset proposed in (Min et al., 2022)
as the meta-training dataset for the language model and the COCO dataset (Lin et al., 2014) as the VL
training dataset for MetaVL. The evaluation experiments are conducted on three datasets including VQA (Antol et al., 2015), OK-VQA (Marino et al.,
2019), and GQA (Hudson and Manning, 2019).
Frozen leveraged an internal GPT-like language model with 7 billion parameters as the backbone of their proposed model. As their model is not publicly available, we trained Frozen with GPT2-
Medium as the frozen language model and consider it as our main baseline (FrozenA) due to its model size. We also train a Frozen model with a GPT-J 6B language model (the GPT most similar to Frozen's) and obtained performance close to the original Frozen model; we use it as our second baseline, denoted FrozenB.
## 4.2 Training And Evaluation Setting
Initially, We meta-train a GPT2-Medium LM on a collection of 142 meta-training language datasets with a learning rate of 1e-5 and a batch size of 8 using the setting named as "HR→LR with instructions (all)" where datasets with equal or greater than 10,000 training examples are used as metatraining tasks and the rest of the datasets are used as target tasks. The training is done on 8 NVIDIA
RTX A6000 for 80,000 steps which took ∼ 6 hours.
Then, we train MetaVL on the training split of COCO where we use a learning rate of 5e-5 and 2e-6 for the visual prefix and visual encoder, respectively, while the rest of the model parameters are frozen. We use a batch size of 32 and trained MetaVL using 4 NVIDIA RTX A6000 for 8 epochs which takes ∼ 48 hours. Inference time depends on the number of shots and varies from 2-5 hours for 0-3 shots on 5000 test examples. Our visual encoder is CLIP-RN50x16 (Radford et al., 2021) with a feature grid size of 144 × 3072 and our visual prefix is an MLP layer with a dimension of 3072 × 768.
For in-context evaluation on VQA datasets, we randomly pick a specific number -n- of sampled data-label pairs, known as shots, from the training set and feed them to the model in-context followed by a single data from the val/test set. Fig. 2 provides some illustrative examples for the evaluation process.
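The evaluation prompt can be assembled as in the sketch below, interleaving the n support image-question-answer triples with the query; the textual template ("Question: ... Answer: ...") is our own illustrative choice rather than the paper's exact format.

```python
import random

def build_prompt(train_set, query, n_shots=3, seed=0):
    """train_set: list of dicts with 'image', 'question', 'answer'; query: dict with 'image', 'question'.
    Returns an interleaved list of (image, text) segments forming the in-context prompt."""
    shots = random.Random(seed).sample(train_set, n_shots)
    segments = [(ex["image"], f"Question: {ex['question']} Answer: {ex['answer']}") for ex in shots]
    segments.append((query["image"], f"Question: {query['question']} Answer:"))
    return segments  # each image is turned into visual-prefix tokens; text is tokenized by the frozen LM
```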
To conduct the evaluation, we utilize a subset of 5,000 instances from the val/test dataset due to computational constraints. The generated output from the model is then compared against the expected answer, as established in previous studies.
In cases where an exact match is not achieved, we employ a technique to identify the most closely related answer from a set of candidate answers (The set can be defined as a unique list of all answers in the training dataset). This involves computing the cosine similarity between the output's embedding and each candidate answer's embedding achieved by Sentence BERT (Reimers and Gurevych, 2019).
We then compare the selected output with the corresponding answer to determine the match. The training datasets for VQA, OK-VQA, and GQA
contain approximately 3,000, 4,200, and 3,000 distinct answers, respectively. Furthermore, we performed an additional round of human evaluation on the model's output without matching, and the findings are summarized in the appendix (Table 2). The human evaluation on a separate test set of 2000 examples aimed to delve deeper into instances where the model's output, while accurate, didn't precisely match the provided answer. Three such examples are presented in Fig. 3, where the initial evaluation did not consider the prediction as correct, but it was deemed correct in the subsequent evaluation.
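A sketch of this matching step using the sentence-transformers library is shown below; the particular checkpoint name is an illustrative assumption.

```python
from sentence_transformers import SentenceTransformer, util

matcher = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def match_answer(generated: str, candidates: list) -> str:
    """Map a free-form generation to the closest candidate answer by cosine similarity."""
    if generated.strip().lower() in {c.lower() for c in candidates}:
        return generated.strip().lower()  # exact match
    sims = util.cos_sim(matcher.encode(generated, convert_to_tensor=True),
                        matcher.encode(candidates, convert_to_tensor=True))
    return candidates[int(sims.argmax())]
```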
Table 1: 3-shot in-context learning performance of MetaVL and the Frozen baselines on VQA, OK-VQA, and GQA under automatic and human evaluation.

| | | FrozenA | FrozenB | MetaVL |
|----------------------|--------|---------|---------|--------|
| LM size | | 375M | 7B | 375M |
| Automatic evaluation | VQA | 18.63 | 34.07 | 33.12 |
| | OK-VQA | 3.17 | 11.97 | 9.60 |
| | GQA | 13.86 | 25.76 | 31.96 |
| Human evaluation | VQA | 16.68 | - | 35.09 |
| | OK-VQA | 6.41 | - | 19.22 |
| | GQA | 19.96 | - | 38.29 |
## 4.3 Results And Analysis
Quantitative analysis To evaluate MetaVL, we consider three common visual question-answering datasets including VQA, OK-VQA, and GQA. We compare MetaVL results with the mentioned two baselines in Table 1 for 3-shot in-context learning based on both automatic and human evaluation. According to the results, the performance of Frozen improves as its model size increases while MetaVL
achieved competitive results in all three tasks. To further analyze how many image-text pairs are required to enable in-context learning for the VL task, we trained MetaVL with 50 percent of the training data; the results show that the performance dropped slightly but the model preserves its capability to learn from in-context data (Table 3).
The effect of the number of in-context shots According to Figure 4, in almost all settings, the performance of MetaVL improves as the number of shots increases, which shows that the model is gaining knowledge from the data in context. This result further illustrates the model's capability to learn from the in-context examples, supporting that MetaVL benefits from the meta-learning knowledge for in-context learning. The numbers in the graph are summarized in Table 2 in the appendix.

![4_image_0.png](4_image_0.png)
The effect of having adaptor layers in LM
MAGMA claims that adding trainable adaptor layers and allowing the LM to be slightly fine-tuned during the VL training process is beneficial for in-context learning. Compared with Frozen, in addition to being trained on an 8× larger set of VL datasets, MAGMA also includes the training splits of the target datasets in its training set, while Frozen is adapted to an unseen new task in-context (in-context learning). We evaluated this method by adding adaptor layers to both Frozen and MetaVL and denote the corresponding models by Frozen w/adap and MetaVL w/adap, respectively, in Fig. 4. Our results demonstrate that having a fully frozen language model in MetaVL better preserves the in-context learning ability of the language model. It is also noticeable that adding adaptor layers improves the zero-shot performance of Frozen. We hypothesize that this improvement is due to a better vision-language alignment obtained by letting both the vision and language submodels be involved in the alignment process.
Qualitative analysis We provide some qualitative examples to better illustrate the performance of MetaVL for in-context learning in different VQA tasks. Fig. 2 shows a few examples of the output of MetaVL for 3-shot in-context learning. More examples are presented in the Appendix.
## 5 Conclusion
We investigate the feasibility of transferring meta-learning knowledge for in-context learning from a resource-rich single modality to multimodality. We have shown that by leveraging a meta-trained language model in a VL model, we can transfer the ability of "learning to learn" in context to VL, resulting in a strong VL few-shot learner. With extensive experiments on three common VL datasets, we have shown that the in-context learning performance of MetaVL is superior to the baseline even when our model is 20 times smaller.
## 6 Acknowledgment
This work was supported by DARPA under agreement HR00112190130 and DARPA MSC program under agreement N660011924032. We would like to thank the reviewers for their feedback to improve this research work.
## Limitations
While we have shown the potential of transferring in-context learning ability from a language model to VL tasks, the experiments in this paper are limited in two aspects. (1) We considered only the VQA task, which is limited in scope. It is unclear whether our method generalizes to other VL tasks. In fact, as most tasks in the VL domain take the form of visual question answering, it is less well-defined what "cross-task generalization" would entail in VL compared to NLP. (2) Due to computational limitations, we experiment with only a moderate-sized LM. It is unclear how our method performs after scaling up.
## References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank. 2021.
Magma–multimodal augmentation of generative models through adapter-based finetuning. *arXiv* preprint arXiv:2112.05253.
Theodoros Evgeniou and Massimiliano Pontil. 2004.
Regularized multi–task learning. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 109–
117.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017.
Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pages 1126–1135. PMLR.
Drew A Hudson and Christopher D Manning. 2019.
Gqa: A new dataset for real-world visual reasoning and compositional question answering. Conference on Computer Vision and Pattern Recognition
(CVPR).
Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. 2023. Grounding language models to images for multimodal generation. *arXiv preprint* arXiv:2301.13823.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *European conference on computer vision*, pages 740–755. Springer.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. *arXiv preprint* arXiv:2304.08485.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204.
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. MetaICL: Learning to learn in context. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2791–2809, Seattle, United States.
Association for Computational Linguistics.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763.
PMLR.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. *arXiv preprint* arXiv:1706.05098.
Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. *Advances in Neural Information Processing Systems*, 34:200–212.
Ricardo Vilalta and Youssef Drissi. 2002. A perspective view and survey of meta-learning. *Artificial intelligence review*, 18(2):77–95.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. *arXiv preprint arXiv:2108.10904*.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. 2022.
An empirical study of gpt-3 for few-shot knowledgebased vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081–
3089.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32.
Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *arXiv preprint arXiv:2204.00598*.
## A Appendix
![7_image_0.png](7_image_0.png)
![8_image_0.png](8_image_0.png)
![9_image_0.png](9_image_0.png)
![10_image_0.png](10_image_0.png)
| model | FrozenA w/ adap | FrozenA | MetaVL w/ adap | MetaVL |
|---|---|---|---|---|
| n-shot | 0 / 1 / 2 / 3 | 0 / 1 / 2 / 3 | 0 / 1 / 2 / 3 | 0 / 1 / 2 / 3 |
| *Automatic evaluation* | | | | |
| VQA | 28.72 / 18.98 / 14.23 / 7.60 | 12.94 / 14.92 / 18.11 / 18.63 | 31.98 / 30.03 / 30.01 / 29.96 | 31.6 / 32.01 / 32.89 / 33.12 |
| OK-VQA | 7.36 / 6.30 / 3.98 / 2.34 | 2.91 / 3.02 / 4.04 / 3.30 | 10.94 / 9.97 / 10.32 / 10.92 | 9.58 / 9.30 / 9.55 / 9.60 |
| GQA | 22.62 / 15.44 / 12.96 / 6.54 | 8.80 / 10.81 / 12.17 / 13.86 | 29.12 / 28.31 / 27.78 / 26.74 | 30.10 / 30.05 / 31.32 / 31.96 |
| *Human evaluation* | | | | |
| VQA | 25.49 / 15.66 / 16.70 / 11.53 | 8.79 / 13.62 / 15.31 / 16.68 | 28.20 / 26.61 / 26.12 / 26.01 | 30.24 / 31.33 / 33.89 / 35.09 |
| OK-VQA | 6.70 / 6.04 / 3.88 / 2.56 | 4.67 / 4.71 / 4.94 / 6.41 | 14.67 / 9.97 / 9.01 / 9.24 | 14.72 / 13.95 / 17.95 / 19.22 |
| GQA | 30.01 / 14.72 / 8.92 / 5.59 | 6.18 / 15.85 / 19.07 / 19.96 | 33.74 / 32.09 / 31.81 / 31.58 | 35.08 / 37.65 / 38.03 / 38.29 |
Table 2: Accuracy of MetaVL and Frozen, w/ and w/o adaptors with 0-3 shots of in-context data.
| | | MetaVL | MetaVL 50% |
|---|---|---|---|
| Automatic evaluation | VQA | 33.12 | 30.32 |
| | OK-VQA | 9.60 | 7.56 |
| | GQA | 31.96 | 27.77 |
| Human evaluation | VQA | 35.09 | 34.02 |
| | OK-VQA | 19.22 | 18.19 |
| | GQA | 38.29 | 35.66 |

Table 3: Accuracy of MetaVL trained on 100% vs. 50% of the VL training data.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**

4.2
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4.2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4.2

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
valentini-etal-2023-interpretability | On the Interpretability and Significance of Bias Metrics in Texts: a {PMI}-based Approach | https://aclanthology.org/2023.acl-short.44 | In recent years, word embeddings have been widely used to measure biases in texts. Even if they have proven to be effective in detecting a wide variety of biases, metrics based on word embeddings lack transparency and interpretability. We analyze an alternative PMI-based metric to quantify biases in texts. It can be expressed as a function of conditional probabilities, which provides a simple interpretation in terms of word co-occurrences. We also prove that it can be approximated by an odds ratio, which allows estimating confidence intervals and statistical significance of textual biases. This approach produces similar results to metrics based on word embeddings when capturing gender gaps of the real world embedded in large corpora. | # On The Interpretability And Significance Of Bias Metrics In Texts: A Pmi-Based Approach
Francisco Valentini1,2 Germán Rosati3 Damián Blasi4 **Diego Fernandez Slezak**1,5 Edgar Altszyler1,2 1Instituto de Investigación en Ciencias de la Computación, CONICET-UBA, Argentina 2Maestría en Data Mining, Universidad de Buenos Aires (UBA), Argentina 3CONICET. Escuela IDAES, Universidad Nacional de San Martín, Argentina 4Harvard University, USA
5Departamento de Computación, FCEyN, UBA, Argentina [email protected], [email protected], [email protected], [email protected], [email protected]
## Abstract
In recent years, word embeddings have been widely used to measure biases in texts. Even if they have proven to be effective in detecting a wide variety of biases, metrics based on word embeddings lack transparency and interpretability. We analyze an alternative PMI-based metric to quantify biases in texts. It can be expressed as a function of conditional probabilities, which provides a simple interpretation in terms of word co-occurrences. We also prove that it can be approximated by an odds ratio, which allows estimating confidence intervals and statistical significance of textual biases.
This approach produces similar results to metrics based on word embeddings when capturing gender gaps of the real world embedded in large corpora.1
## 1 Introduction
Word embedding-based approaches have been used for detecting and quantifying gender, ethnic, racial, and other stereotypes present in corpora. While some research has focused on investigating biases by training embeddings on a specific corpus of interest (Garg et al., 2018; Kozlowski et al.,
2019; Lewis and Lupyan, 2020; Charlesworth et al.,
2021), others have employed pretrained word embeddings to assess potential biases inherent in the training corpus (Caliskan et al., 2017; Garg et al.,
2018; DeFranza et al., 2020; Jones et al., 2020).
Though not as popular, Pointwise Mutual Information (PMI) is a measure of word similarity which has also been used to study biases (Gálvez et al.,
2019; Bordia and Bowman, 2019; Aka et al., 2021).
1Code for the paper is available at https://github.com/ftvalentini/BiasPMI
However, the statistical properties and advantages of this measure as compared to the widely used word embeddings have not been studied yet.
In this article we study a PMI-based metric to measure bias in corpora and explain its statistical and interpretability benefits, which have been overlooked until now. Our contributions are as follows:
(1) We show the PMI-based bias metric can be approximated by an odds ratio, which makes computationally inexpensive and meaningful statistical inference possible. (2) We provide evidence that methods based on GloVe, skip-gram with negative sampling (SGNS) and PMI produce comparable results when the biases measured in large corpora are compared to empirical information about the world. (3) We contend that the PMI-based bias metric is substantially more transparent and interpretable than the embedding-based metrics.
Scope: The detection and mitigation of bias in models is a research topic that is beyond the scope of this paper. Our paper's contribution focuses on the measurement of bias in raw corpora (not models), which is a relevant task in Computational Social Science.
## 2 Background
Consider two sets of context words A and B, and a set of target words C. Textual bias measures quantify how much more the words of C are associated with the words of A than with those of B. Most metrics can be expressed as a difference between the similarities between A and C, on the one hand, and B and C, on the other:
$$\mathrm{Bias}=\mathrm{sim}(A,C)-\mathrm{sim}(B,C)\tag{1}$$
For instance, to estimate the female vs. male gender bias of occupations, context words are often gendered pronouns or nouns, e.g., *A = {she, her, woman, ...}* and *B = {he, him, man, ...}*; whereas C is usually considered one word at a time, estimating for each specific job (*nurse*, *doctor*, *engineer*, etc.) the relative association to A and B.
One particularly popular metric which uses word embeddings (WE) is that of Caliskan et al. (2017):
$$\mathrm{Bias_{WE}}=\frac{\operatorname*{mean}_{a\in A}\cos(v_{a},v_{c})-\operatorname*{mean}_{b\in B}\cos(v_{b},v_{c})}{\operatorname*{std\,dev}_{x\in A\cup B}\cos(v_{x},v_{c})}\tag{2}$$

where $v_i$ stands for the word embedding of word $i$ and $\cos(v_i, v_j)$ is the cosine similarity between vectors.
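For concreteness, Equation 2 can be computed directly from the embedding vectors. The sketch below is ours, not the authors' implementation: `emb` is a hypothetical dictionary mapping words to NumPy vectors, and the sample standard deviation (ddof=1) in the denominator is an assumption.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_we(emb, A, B, c):
    """Effect-size bias of target word c w.r.t. context word lists A and B (Eq. 2)."""
    sims_a = [cos(emb[a], emb[c]) for a in A]
    sims_b = [cos(emb[b], emb[c]) for b in B]
    pooled = sims_a + sims_b                       # similarities for all x in A ∪ B
    return (np.mean(sims_a) - np.mean(sims_b)) / np.std(pooled, ddof=1)
```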
Permutations tests that shuffle context words have been used to calculate the statistical significance of BiasWE (Caliskan et al., 2017; Charlesworth et al., 2021). These tests permute the words from A and B repeatedly and compute the bias metric in each iteration to simulate a null distribution of bias. The two-tailed p-value is calculated as the fraction of times the absolute value of bias from the null distribution is equal to or greater than the one observed (North et al., 2002).
With a similar re-sampling approach, bootstrap can also be performed (Garg et al., 2018). The bootstrap distribution is obtained by calculating the bias metric over many bootstrap samples from A
and B, sampled separately for each group. The standard error of bias is then estimated as the sample standard deviation of the bootstrap distribution, and the quantiles of the distribution are used to obtain percentile confidence intervals (Davison and Hinkley, 1997).
## 3 Bias Measurement With PMI
Here we introduce a bias metric that follows equation 1 but uses Pointwise Mutual Information (PMI)
(Church and Hanks, 1990) as a measure of word similarity:
$${\mathrm{Bias}}_{\mathrm{PMI}}={\mathrm{PMI}}(A,C)-{\mathrm{PMI}}(B,C)\quad(3)$$
PMI measures the first-order association between two lists of words X and Y :
$$\mathrm{PMI}(X,Y)=\log\,\frac{P(X,Y)}{P(X)P(Y)}=\log\,\frac{P(Y|X)}{P(Y)},\tag{4}$$

where $P(X,Y)$ is the probability of co-occurrence between any word in X and any word in Y within a window of words, and $P(X)$ and $P(Y)$ are the probabilities of occurrence of any word in X and any word in Y, respectively. Equation 4 shows that PMI can be expressed as the ratio between the probability of words in Y co-occurring with words in X, and the probability of words in Y appearing in any context.
## 3.1 Approximation Of The Pmi-Based Bias By Log Odds Ratio
Combining equations 3 and 4, the PMI-based bias can be written as a ratio of conditional probabilities, which can be estimated via maximum likelihood using the co-occurrence counts from the corpus:
$${\mathrm{Bias}}_{\mathrm{PMI}}=\log{\frac{P(C|A)}{P(C|B)}}=\log{\frac{{\frac{f_{A,C}}{f_{A,C}+f_{A,nC}}}}{{\frac{f_{B,C}}{f_{B,C}+f_{B,nC}}}}},\tag{5}$$
where $f_{A,C}$ and $f_{B,C}$ represent the number of times words in C appear in the context of words in A and B, respectively, and $f_{A,nC}$ and $f_{B,nC}$ represent how many times words not in C appear in the context of A and B, respectively. See the contingency table in Appendix A for reference.

$\mathrm{Bias_{PMI}}$ is not computable if $f_{A,C} = 0$ or $f_{B,C} = 0$. We address this by adding a small value ϵ to all co-occurrences in the corpus (Jurafsky and Martin, 2009).
For most practical applications, co-occurrences between words not in a group (most of the vocabulary) and a group of specific words are larger than the co-occurrences between two groups of specific words. More precisely:
$$f_{B,nC}\gg f_{B,C},\qquad f_{A,nC}\gg f_{A,C}.\tag{6}$$

Thus:

$$\mathrm{Bias_{PMI}}\approx\log\frac{f_{A,C}/f_{A,nC}}{f_{B,C}/f_{B,nC}}\approx\log\mathrm{OR},\tag{7}$$
where OR is the odds ratio. Therefore, parametric confidence intervals and hypothesis testing can be conducted for BiasPMI (details in Appendix B).
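Equation 5 can be computed directly from the four co-occurrence counts. The sketch below is ours, with ϵ-smoothing as described above; the example counts at the end are purely illustrative and not taken from the paper.

```python
import numpy as np

def bias_pmi(f_AC, f_AnC, f_BC, f_BnC, eps=0.5):
    """Log-ratio of conditional probabilities P(C|A)/P(C|B) (Eq. 5), with additive smoothing."""
    f_AC, f_AnC, f_BC, f_BnC = (x + eps for x in (f_AC, f_AnC, f_BC, f_BnC))
    p_c_given_a = f_AC / (f_AC + f_AnC)
    p_c_given_b = f_BC / (f_BC + f_BnC)
    return np.log(p_c_given_a / p_c_given_b)

# Illustrative counts for a target word with female (A) and male (B) context words
print(bias_pmi(f_AC=820, f_AnC=1_500_000, f_BC=230, f_BnC=1_600_000))
```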
## 4 Experiments
To compare BiasPMI with BiasWE we replicate three experiments that compare the gender biases measured in texts with the ones from other datasets:
1. *Occupations-gender* (Caliskan et al., 2017):
gender bias in text is compared to the percentage of women employed in a list of occupations in the U.S. Bureau of Labor Statistics in 2015.
2. *Names-gender* (Caliskan et al., 2017): for a list of androgynous names, gender bias in text is compared to the percentage of people with each name who are women in the 1990 U.S. census.
3. *Norms-gender* (Lewis and Lupyan, 2020): textual gender bias is compared to the Glasgow Norms, a set of ratings for 5,500 English words which summarize the answers of participants who were asked to rate the gender association of each word (Scott et al., 2019).
Details about these datasets are in Appendix C.
We train GloVe, SGNS and PMI on two corpora: the 2014 English Wikipedia and English subtitles from OpenSubtitles (Lison and Tiedemann, 2016). We pre-process both corpora by converting all text to lowercase, removing non alpha-numeric symbols and applying sentence splitting, so that one sentence equates to one document. After preprocessing, the Wikipedia corpus is made up of 1.2 billion tokens and 53.9 million documents, whereas the OpenSubtitles corpus contains 2.4 billion tokens and 447.9 million documents. Refer to Appendix D for additional details about each corpus and to Appendix E for implementation details.
For each of the three settings, we assess the correlation between the dataset's female metric and the female bias as measured by PMI (equation 5),
and SGNS and GloVe (equation 2). Female bias refers to the bias metrics where A and B represent lists of female and male words, respectively.2 Positive values imply that the target word is more associated with female terms than with male ones.
We measure correlation with Pearson's r. We also compute a weighted Pearson's r, which takes into account the standard error of each bias estimate and reduces the influence of noisy estimates on the correlation. Finally, for each word in each experiment we compute confidence intervals and p-values for the null hypothesis of absence of bias.3 The aim of these experiments is not to find which method produces greater correlations in each task; it is rather to check whether BiasPMI produces similar results to the widely used BiasWE. If it does, it means our metric can extract trends from large corpora that correlate with gender stereotypes at least as well as embedding-based metrics can.
2A=*{female, woman, girl, sister, she, her, hers, daughter}*
and B=*{male, man, boy, brother, he, him, his, son}* (Caliskan et al., 2017; Lewis and Lupyan, 2020).
3In the case of BiasWE, we apply bootstrap with 2,000 iterations and permutation tests with all the possible combinations.
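The weighted correlation mentioned above can be sketched as follows; the choice of inverse-variance weights (1/SE²) is our assumption of how "taking into account the standard error" is operationalized, not a detail stated in the paper.

```python
import numpy as np

def weighted_pearson(x, y, se):
    """Pearson's r with inverse-variance weights derived from the bias standard errors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(se, float) ** 2   # noisier bias estimates get smaller weights
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov_xy = np.sum(w * (x - mx) * (y - my))
    var_x = np.sum(w * (x - mx) ** 2)
    var_y = np.sum(w * (y - my) ** 2)
    return cov_xy / np.sqrt(var_x * var_y)
```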
## 5 Results
Table 1 shows Pearson's r weighted and unweighted coefficients for each of the eighteen experiments (three association tests in two corpora with three bias measures each). The scatter plots associated with the Wikipedia's coefficients are available in Appendix F.1.
All in all, BiasPMI and BiasWE yield comparable results in these settings. There is no single method which consistently has the largest or lowest correlations.
Weights tend to either increase the correlation considerably or to make it slightly weaker. This implies that in these experiments, noisy textual bias estimates usually agree less with the gender bias in the validation datasets. However, this does not mean that for each individual bias estimate the standard errors of each method are mutually interchangeable or equally useful (see section 6.2).
![2_image_0.png](2_image_0.png)
In Figure 1 we compare the p-values of the permutation test of BiasWE with SGNS with the p-values of the log odds ratio test of BiasPMI, for the *Names-gender* test conducted in Wikipedia. A
Benjamini-Hochberg correction was applied to the p-values obtained by both methods to account for multiple comparisons (Benjamini and Hochberg, 1995). Appendix F.2 shows this example is consistent with the rest of the experiments.
| Corpus | Experiment | Correlation | PMI | GloVe | SGNS |
|---|---|---|---|---|---|
| OpenSubtitles | Glasgow-Gender | r | 0.58 | 0.49 | 0.55 |
| | | Weighted r | 0.58 | 0.69 | 0.72 |
| | Names-Gender | r | 0.80 | 0.74 | 0.81 |
| | | Weighted r | 0.84 | 0.82 | 0.77 |
| | Occupations-Gender | r | 0.66 | 0.67 | 0.79 |
| | | Weighted r | 0.81 | 0.83 | 0.89 |
| Wikipedia | Glasgow-Gender | r | 0.50 | 0.44 | 0.50 |
| | | Weighted r | 0.44 | 0.59 | 0.66 |
| | Names-Gender | r | 0.78 | 0.74 | 0.77 |
| | | Weighted r | 0.75 | 0.79 | 0.76 |
| | Occupations-Gender | r | 0.69 | 0.70 | 0.70 |
| | | Weighted r | 0.79 | 0.67 | 0.78 |

Table 1: Unweighted and weighted Pearson's r between the gender biases measured with PMI, GloVe, and SGNS in each corpus and the corresponding validation data.

In this example, only the word with the highest BiasWE is significantly different from zero at a 0.10 significance level. In contrast, most words have a BiasPMI significantly different from zero, with the exception of some points with bias values close to zero. This is because the procedures that compute p-values for each type of metric capture essentially different types of variability (see section 6.2).
## 6 Discussion 6.1 Interpretability
Although there are studies on how word vector spaces are formed (Levy and Goldberg, 2014; Levy et al., 2015; Ethayarajh et al., 2019) and on the biases they encode (Bolukbasi et al., 2016; Zhao et al., 2017; Gonen and Goldberg, 2019), there is no transparent interpretation of the embedding-based bias metrics in terms of co-occurrences of words in the texts.
In contrast, BiasPMI can be expressed intrinsically in terms of conditional probabilities (equation 5). The bias is interpreted as the logarithm of how much more likely it is to find words in C in the context of words in A than in the context of words in B. For example, in the Wikipedia corpus the female BiasPMI of word *nurse* is 1.3172, thus,
$$\frac{P(\textit{nurse}\mid A)}{P(\textit{nurse}\mid B)}=\exp(1.3172)=3.7330.$$
This means that it is 273.30% more likely to find the word *nurse* in the context of female words (A)
than in the context of male words (B).
The lack of interpretability of BiasWE is compounded by the fact that SGNS and GloVe can capture word associations of second order or higher
(Altszyler et al., 2018; Schlechtweg et al., 2019),
whereas PMI is strictly a first-order association metric. When embeddings are used to measure biases, it is not possible to tell whether the results are due to widespread first-order co-occurrences or are derived from obscure higher-order co-occurrences
(Brunet et al., 2019; Rekabsaz et al., 2021).
For instance, in OpenSubtitles, the BiasPMI of the word *evil* equals −0.25, indicating a higher likelihood of appearing in the context of male context words (B) compared to female ones (A). Conversely, BiasSGNS = 0.23. Even if this stands for female bias, it is difficult to understand the exact source of this result since it is influenced by second and higher-order co-occurrences. Moreover, in recent research we demonstrated that BiasWE
can also yield misleading results by inadvertently capturing disparities in the frequencies of context words (Valentini et al., 2022).
Nevertheless, bias metrics that capture second-order associations have the advantage of managing data sparsity. Since word embeddings can capture synonymy, when data is sparse it might not be necessary to include all words related to the concepts of interest in order to measure meaningful biases.
In the case of our first-order metric, this problem must be addressed by increasing word lists with synonyms and forms of the words of interest.
To illustrate this, let's consider the case of the words *nourish* and *nurture*, which have different frequencies in the Wikipedia corpus (700 and 3, 000, respectively). With BiasPMI, we obtain a bias of 0.33 for *nurture* (p-value < 10−4). However, if we had used its less frequent synonym nourish instead, the BiasPMI would have been −0.10 and not statistically significant (p-value ≈ 0.66).
Here we would not have been able to determine whether there is actually no bias or if there is insufficient data. This shows that it is generally advisable to include all pertinent synonyms and variations of the term whose bias we are trying to measure.
## 6.2 Statistical Inference
The p-values, standard errors and confidence intervals of the log OR approximation are fundamentally different from the ones estimated for BiasWE
through permutations and bootstrap. The uncertainty quantified for BiasPMI captures the variability of the underlying data-generating process, i.e., the variability induced by treating the co-occurrence counts as random quantities. In contrast, the estimates for BiasWE only consider the variability across the sets of context words. This means that multiple words *must* be chosen so that inference can be conducted. In fact, whenever A
and B are single-word lists, there is no way of estimating uncertainty for BiasWE with these methods, whereas it is perfectly feasible for BiasPMI.
As far as we know, we are the first to provide a simple and efficient way of evaluating the statistical significance of bias. This is especially important in Computational Social Science, for which it is useful to have not only a reliable metric to quantify stereotypes but also a reliable tool to measure uncertainty i.e. to know up to what degree the measured values might have been due to statistical fluctuation. Meaningful statistical tests and confidence intervals that capture the variability that really matters are therefore essential.
## 7 Conclusion
We presented a PMI-based metric to quantify biases in texts, which (a) allows for simple and computationally inexpensive statistical inference, (b)
has a simple interpretation in terms of word cooccurrences, and (c) is explicit and transparent in the associations that it is quantifying, since it captures exclusively first-order co-occurrences. Our method produces similar results to the GloVe-based and SGNS-based metrics in experiments which compare gender biases measured in large corpora to the gender gaps of independent empirical data.
## Limitations
We replicate three well-known experiments in the gender bias literature, where bias is measured according to a binary female vs. male view. This choice ignores other views of gender but eases the presentation of the frameworks.
We only use two corpora and three datasets which by no means capture the biases of all the people speaking or writing in the English language.
Moreover, we don't experiment with different corpus sizes, a more diversified set of corpora or more bias types. We hope to explore this in future work.
The hyperparameters of the models have not been varied, using their default values. This replicates the standard experimental setting used in the literature. Since there are no ground truths when measuring biases (that is, there are no annotations with the amount of bias of words in large corpora),
hyperparameters are usually set to their default values.
## References
Alan Agresti. 2003. *Categorical data analysis*, volume 482. John Wiley & Sons.
Osman Aka, Ken Burke, Alex Bauerle, Christina Greer, and Margaret Mitchell. 2021. Measuring model biases in the absence of ground truth. In *Proceedings* of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM.
Edgar Altszyler, Mariano Sigman, and Diego Fernández Slezak. 2018. Corpus specificity in LSA and word2vec: The role of out-of-domain documents. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 1–10, Melbourne, Australia. Association for Computational Linguistics.
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. *Journal of the* Royal Statistical Society Series B (Methodological),
57(1):289–300.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29.
Curran Associates, Inc.
Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics.
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of* Machine Learning Research, pages 803–811. PMLR.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science, 356(6334):183–186.
Tessa ES Charlesworth, Victor Yang, Thomas C Mann, Benedek Kurdi, and Mahzarin R Banaji. 2021. Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words.
Psychological Science, 32(2):218–240.
Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. *Computational Linguistics*, 16(1):22–29.
A. C. Davison and D. V. Hinkley. 1997. Bootstrap Methods and their Application. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
David DeFranza, Himanshu Mishra, and Arul Mishra.
2020. How language shapes prejudice against women: An examination across 45 world languages. *Journal of Personality and Social Psychology*, 119(1):7–22.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst.
2019. Understanding undesirable word embedding associations. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 1696–1705, Florence, Italy. Association for Computational Linguistics.
Ramiro H. Gálvez, Valeria Tiffenberg, and Edgar Altszyler. 2019. Half a century of stereotyping associations between gender and intellectual ability in films.
Sex Roles, 81(9):643–654.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635– E3644.
Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics.
Jason J. Jones, Mohammad Ruhul Amin, Jessica Kim, and Steven Skiena. 2020. Stereotypical gender associations in language have decreased over time. *Sociological Science*, 7(1):1–35.
Daniel Jurafsky and James H. Martin. 2009. *Speech and* Language Processing (2nd Edition). Prentice-Hall, Inc., USA.
Austin C. Kozlowski, Matt Taddy, and James A. Evans.
2019. The geometry of culture: Analyzing the meanings of class through word embeddings. *American* Sociological Review, 84(5):905–949.
Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In *Advances in Neural Information Processing Systems*,
volume 27. Curran Associates, Inc.
Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. *Transactions of the Association for Computational Linguistics*, 3:211–225.
Molly Lewis and Gary Lupyan. 2020. Gender stereotypes are reflected in the distributional structure of 25 languages. *Nature Human Behaviour*, 4(10):1021–
1028.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In *Proceedings of the Tenth* International Conference on Language Resources and Evaluation (LREC'16), pages 923–929, Portorož, Slovenia. European Language Resources Association
(ELRA).
B. V. North, D. Curtis, and P. C. Sham. 2002. A note on the calculation of empirical p values from monte carlo procedures. *The American Journal of Human* Genetics, 71(2):439–441.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In *Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks*, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
Navid Rekabsaz, Robert West, James Henderson, and Allan Hanbury. 2021. Measuring societal biases from text corpora with smoothed first-order co-occurrence.
Computing Research Repository, arXiv:1812.10424.
Dominik Schlechtweg, Cennet Oguz, and Sabine Schulte im Walde. 2019. Second-order cooccurrence sensitivity of skip-gram with negative sampling. In *Proceedings of the 2019 ACL Workshop* BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 24–30, Florence, Italy. Association for Computational Linguistics.
Graham G Scott, Anne Keitel, Marc Becirspahic, Bo Yao, and Sara C Sereno. 2019. The Glasgow Norms: Ratings of 5,500 words on nine scales. *Behavior Research Methods*, 51:1258–1270.
Francisco Valentini, Germán Rosati, Diego Fernandez Slezak, and Edgar Altszyler. 2022. The undesirable dependence on frequency of gender bias metrics based on word embeddings. In *Findings of the Association for Computational Linguistics: EMNLP 2022*.
Association for Computational Linguistics.
Jeroen van Paridon and Bill Thompson. 2021. subs2vec:
Word embeddings from subtitles in 55 languages.
Behavior Research Methods, 53(2):629–655.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In *Proceedings of the 2017* Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguistics.
## A Contingency Table Of Co-Occurrences
BiasPMI is computed with the co-occurrences between the groups of words A, B and C. These can be represented with the following contingency table:
| | C | not C |
|---|---|---|
| A | $f_{A,C}$ | $f_{A,nC}$ |
| B | $f_{B,C}$ | $f_{B,nC}$ |

Table 2: Contingency table of word co-occurrences.

This contains, for example, how many times words in A appear in the context of words in C ($f_{A,C}$) and how many times they do not ($f_{A,nC}$).
## B Statistical Inference For Biaspmi
The distribution of the log odds ratio (equation 7)
converges to normality (Agresti, 2003). Its 95%
confidence interval is given by

$$\mathrm{CI_{95\%}^{Bias_{PMI}}}=\mathrm{Bias_{PMI}}\pm1.96\,SE$$

with

$$SE=\sqrt{\frac{1}{f_{A,C}}+\frac{1}{f_{B,C}}+\frac{1}{f_{A,nC}}+\frac{1}{f_{B,nC}}}\approx\sqrt{\frac{1}{f_{A,C}}+\frac{1}{f_{B,C}}}.$$
This last approximation considers condition 6.
We can test the null hypothesis that the log odds ratio is 0 (absence of bias) with a standard Z-test, whereby the two-sided p-value is computed with 2P(Z < −|BiasPMI|/SE), where Z is a standard normal random variable.
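A sketch of the interval and test above, using the odds-ratio approximation of Equation 7; SciPy is used only for the normal CDF, and the function is ours rather than the authors' code.

```python
import numpy as np
from scipy.stats import norm

def logodds_inference(f_AC, f_BC, f_AnC, f_BnC):
    """Approximate SE, 95% CI, and two-sided p-value for Bias_PMI via the log odds ratio."""
    bias = np.log((f_AC / f_AnC) / (f_BC / f_BnC))           # log OR ≈ Bias_PMI (Eq. 7)
    se = np.sqrt(1 / f_AC + 1 / f_BC + 1 / f_AnC + 1 / f_BnC)
    ci = (bias - 1.96 * se, bias + 1.96 * se)
    p_value = 2 * norm.cdf(-abs(bias) / se)                   # Z-test for log OR = 0
    return bias, se, ci, p_value
```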
## C Datasets
For the *occupations-gender* and *names-gender* experiments, the female proportions for names and occupations in the U.S. were extracted from the datasets provided by Will Lowe's cbn R library 4, which contains tools for replicating Caliskan et al.
(2017). We used the 50 names and 44 occupations available in this source.
The original Glasgow Norms comprise 5,553 English words. Individuals from the University of Glasgow were asked to measure the degree to which each word is associated with male or female behavior on a scale from 1 (very feminine) to 7
(very masculine). Following Lewis and Lupyan
(2020), we average the norms of homonyms and compute 8 − *rating* to flip the scale so that it represents *femaleness* according to human judgement.
4,668 words from the original list overlapped with OpenSubtitle's vocabulary, and 4,642 words overlapped with the Wikipedia vocabulary.
## D Corpora
The Wikipedia corpus was built from the August 2014 dump, licensed under CC BY-SA 3.0.5 We removed articles with less than 50 tokens.
The OpenSubtitles corpus (Lison and Tiedemann, 2016) includes English subtitles from movies and TV shows and was built with the aid of the subs2vec Python package with MIT License
(van Paridon and Thompson, 2021).
## E Model Training
We ignore words with less than 100 occurrences, resulting in a vocabulary of 172,748 words for Wikipedia and 128,974 words for OpenSubtitles.
We use a window size of 10 in all models and apply "dirty" subsampling i.e. out-of-vocabulary tokens are removed before the corpus is processed into word-context pairs (Levy et al., 2015).
Word embeddings with 300 dimensions are trained with SGNS and GloVe. For SGNS we use the word2vec implementation of Gensim 4.1.2, licensed under GNU LGPLv2.1 (Řehůřek and Sojka, 2010), with default hyperparameters. GloVe is trained with the original implementation (Pennington et al., 2014), version 1.2 (Apache License, Version 2.0), with 100 iterations. This version uses additive word representations by default, in which each word embedding is the sum of its corresponding context and word vectors.

4https://conjugateprior.github.io/cbn/

5https://archive.org/download/enwiki-20141208
For PMI, we count co-occurrences with the GloVe module (Pennington et al., 2014) with version 1.2 and set the smoothing parameter ϵ to 0.5.
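A sketch of the SGNS training call with the hyperparameters stated above (300 dimensions, window 10, minimum frequency 100, skip-gram with negative sampling, Gensim defaults otherwise); `corpus_sentences` is assumed to be an iterable of token lists, one per pre-processed sentence-level document.

```python
from gensim.models import Word2Vec

def train_sgns(corpus_sentences):
    """Train SGNS embeddings with the settings described in this appendix."""
    return Word2Vec(
        sentences=corpus_sentences,
        vector_size=300,   # embedding dimensionality
        window=10,         # symmetric context window
        min_count=100,     # ignore words with fewer than 100 occurrences
        sg=1,              # skip-gram with negative sampling
        workers=4,
    )
```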
We ran all experiments on a desktop machine with 4 cores Intel Core i5-4460 CPU and 32 GB
RAM. Training times were around 1 hour per epoch with SGNS and 5 minutes per iteration with GloVe.
Co-occurrence counts used for PMI were obtained in around 20 minutes with GloVe.
## F Results

## F.1 Experiments
In Figures 2, 3 and 4 we display the scatter plots of the three experiments described in section 4 for the Wikipedia corpus. The findings for OpenSubtitles are qualitatively the same and we exclude the plots for simplicity.
The vertical axes represent the female vs. masculine bias measures based on PMI (left panels),
GloVe (middle panels), and SGNS (right panels).
Dashed lines represent linear regressions. In the second row, the bias standard error was taken into account as weights in the regression, and error bars are confidence intervals.
All unweighted and weighted correlation coefficients in Table 1 are significantly different from zero at the 0.0001 level.
## F.2 P-Values
Figure 5 shows the corrected p-values for the gender bias of each word in the vertical axes vs. the value of the bias in the horizontal axes. p-values for SGNS and GloVe result from permutations tests whereas PMI uses the log odds ratio test.
All p-values have been corrected with BenjaminiHochberg separately for each setting. The plots for OpenSubtitles are very similar and are excluded for simplicity.
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![9_image_0.png](9_image_0.png)
![9_image_1.png](9_image_1.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
doostmohammadi-etal-2023-surface | Surface-Based Retrieval Reduces Perplexity of Retrieval-Augmented Language Models | https://aclanthology.org/2023.acl-short.45 | Augmenting language models with a retrieval mechanism has been shown to significantly improve their performance while keeping the number of parameters low. Retrieval-augmented models commonly rely on a semantic retrieval mechanism based on the similarity between dense representations of the query chunk and potential neighbors. In this paper, we study the state-of-the-art Retro model and observe that its performance gain is better explained by surface-level similarities, such as token overlap. Inspired by this, we replace the semantic retrieval in Retro with a surface-level method based on BM25, obtaining a significant reduction in perplexity. As full BM25 retrieval can be computationally costly for large datasets, we also apply it in a re-ranking scenario, gaining part of the perplexity reduction with minimal computational overhead. | # Surface-Based Retrieval Reduces Perplexity Of Retrieval-Augmented Language Models
Ehsan Doostmohammadi1∗ Tobias Norlund2,4 Marco Kuhlmann1 **Richard Johansson**2,3 1 Linköping University 2 Chalmers University of Technology 3 University of Gothenburg 4 Recorded Future
## Abstract
Augmenting language models with a retrieval mechanism has been shown to significantly improve their performance while keeping the number of parameters low. Retrieval-augmented models commonly rely on a semantic retrieval mechanism based on the similarity between dense representations of the query chunk and potential neighbors. In this paper, we study the state-of-the-art RETRO model and observe that its performance gain is better explained by surface-level similarities, such as token overlap. Inspired by this, we replace the semantic retrieval in RETRO with a surface-level method based on BM25, obtaining a significant reduction in perplexity. As full BM25 retrieval can be computationally costly for large datasets, we also apply it in a re-ranking scenario, gaining part of the perplexity reduction with minimal computational overhead.
## 1 Introduction
The introduction of the Transformer architecture
(Vaswani et al., 2017) has led to a performance boost in language modeling (see, e.g., Brown et al. 2020), but also to a steep increase of computational cost, as the number of parameters and data points is constantly growing. In reaction to this development, there has recently been a surge in work on retrieval-augmented language models (Izacard and Grave, 2021a; Li et al., 2022), which shows that enabling models to retrieve context from large corpora results in lower perplexity and better accuracy in downstream tasks such as question answering, while at the same time using considerably fewer parameters. In this paper, we specifically focus on the Retrieval-Enhanced Transformer architecture
(RETRO; Borgeaud et al., 2022).
∗Correspondence to [email protected].

By augmenting a language model with a retrieval mechanism, RETRO, like similar architectures, tries to decouple *memorization* of the training data from the additional *generalization* that comes with increasing the number of parameters.
In RETRO, when a chunk of text (a sequence of tokens) has been generated, a dense representation of this chunk is used to retrieve the most similar neighboring chunks from a large retrieval set, based on their L2 distance. Having the previously generated chunks and their nearest neighbors in the retrieval set, the auto-regressive language model has now access to an extended context when predicting the next chunk. The informativeness of this context depends on the effectiveness of the retrieval method.
Borgeaud et al. (2022) note that part of RETRO's performance can be attributed to the token overlap between the generated chunks and the retrieval set. Our starting point in this paper is the observation that the performance gain is actually *better* explained by such surface-level similarities than by the L2 distance between the dense representations that RETRO uses for retrieval. This is in line with recent work by Norlund et al. (2023), who show that the reduction in loss observed in RETRO "almost exclusively" stems from such overlap rather than more sophisticated generalization. Based on these findings, we replace the semantic retrieval method in RETRO with one based on BM25 (Robertson et al., 1995), a surface-level measure. Our results show that retrieving nearest neighbors using BM25 during inference leads to a 13.6% lower perplexity, compared to dense retrieval based on sentence transformers (ST) (Reimers and Gurevych, 2019), a model trained to represent the semantic similarity between sentences.1 Finding the exact neighbors with BM25 is costly on large retrieval sets and might not meet the speed requirements of all applications of retrieval-augmented language models. We therefore explore a hybrid approach where we first retrieve approximate neighbors using ST representations and then re-rank them using BM25. We show that this approach yields 24.7% of the perplexity reduction we get with BM25-based retrieval, with only minimal computational overhead.

1The code and the data for this study can be accessed at github.com/edoost/retro_bm25.
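The re-ranking step of this hybrid approach can be sketched as follows; we use the rank_bm25 package purely for illustration, and the whitespace tokenization and candidate pool size are our own simplifications rather than details from the paper.

```python
from rank_bm25 import BM25Okapi

def rerank_with_bm25(query_chunk: str, candidate_chunks: list[str], k: int = 2):
    """Re-rank ST-retrieved candidate chunks by BM25 score against the query chunk."""
    tokenized_candidates = [c.split() for c in candidate_chunks]
    bm25 = BM25Okapi(tokenized_candidates)
    scores = bm25.get_scores(query_chunk.split())
    order = sorted(range(len(candidate_chunks)), key=lambda i: -scores[i])
    return [candidate_chunks[i] for i in order[:k]]

# candidate_chunks would come from approximate ST-based retrieval (e.g. a few hundred
# candidates), of which the top-k according to BM25 are passed to the language model.
```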
## 2 Method
We experiment with RETRO (Borgeaud et al., 2022)
as a state-of-the-art retrieval-augmented language model.
## 2.1 Model
RETRO is very similar to a standard auto-regressive language model such as T5 (Raffel et al., 2020), the main differences being the introduction of the retrieval mechanism and how the retrieved neighbors are used for language modeling.
Nearest Neighbor Retrieval In RETRO, all textual data is stored and used in chunks of 64 tokens. When the model has generated a chunk $C_u$, it retrieves the $k$ nearest neighbors $N_{1:k}$ to that chunk, together with the chunks $F_{1:k}$ following these neighbor chunks in the retrieval data. It then generates the next chunk $C_{u+1}$ conditioned on the retrieved chunk pairs. Retrieval uses the squared L2 distance on a dense representation (DR) of chunks:

$$d(C_{u},N_{i})=\|\mathrm{DR}(C_{u})-\mathrm{DR}(N_{i})\|_{2}^{2}$$
This leaves us with
$$\operatorname{Ret}(C_{u})=([N_{u}^{1};F_{u}^{1}],\ldots,[N_{u}^{k};F_{u}^{k}])$$
as the retrieved neighbors that the model receives as additional context when generating the next chunk.
The likelihood of the first chunk (C1) does not depend on any neighbors; the model has access to no external context when generating that chunk. During training and perplexity evaluation, the retrieval process is filtered such that chunks originating from the same source document as the training sequence are never considered as neighbors.
Integration of the Neighbors RETRO improves auto-regressive language modeling by conditioning the next-token prediction on the retrieved chunks of text. This means that the probability of generating the next token $x_{t+1}$ depends not only on the previously generated tokens $x_{1:t}$ but also on the retrieved neighbors of the previously generated chunks, as well as their following chunks:
$P\left(x_{t+1}\mid x_{1:t},\text{RET}(C_{1}),\ldots,\text{RET}(C_{u-1});\theta\right)$
When generating the next token, the neighbors as well as the current chunk Cu are passed through a Transformer encoder. In the decoder, cross-attention is over the output of that encoder and the concatenation of the intermediary embeddings of the last few tokens in the previous chunk Cu−1 and the already generated tokens in Cu, a mechanism called *chunked cross-attention*. For more details, see Borgeaud et al. (2022).
Implementation Details As an official implementation of RETRO is not publicly available, we draw upon the implementation in Norlund et al. (2023), which is based on the description in Borgeaud et al. (2022). Our implementation deviates only in that (1) we use learnable relative positional biases as in T5 (Raffel et al., 2020), with a bucket for each unique relative position; (2) instead of BERT (Devlin et al., 2019), we use the pretrained sentence transformers (ST) (Reimers and Gurevych, 2019) model to embed the chunks for the offline retrieval. ST is preferable over BERT,
as it is trained for the task of similarity search, and produces embeddings of lower dimensionality, which makes it more efficient. We use PyTorch
(Paszke et al., 2019) and PyTorch Lightning for distributed training. For the tokenization, we use the pre-trained T5 tokenizer (HuggingFace). For retrieving approximate neighbors, we use faiss
(Johnson et al., 2019), which performs efficient similarity search between dense representations with GPU support for faster indexing and retrieval.
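As a rough sketch of this offline retrieval setup (the checkpoint name and the index type below are placeholders, not the exact configuration used in our experiments):

```python
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder ST checkpoint

# Embed the retrieval-set chunks (strings of roughly 64 tokens each).
chunk_texts = ["first chunk ...", "second chunk ...", "third chunk ..."]
chunk_embs = encoder.encode(chunk_texts, convert_to_numpy=True).astype("float32")

# Exact L2 index; faiss also offers approximate indexes for very large sets.
index = faiss.IndexFlatL2(chunk_embs.shape[1])
index.add(chunk_embs)

# Retrieve the k nearest neighbors of a freshly generated chunk.
query = encoder.encode(["generated chunk C_u ..."], convert_to_numpy=True).astype("float32")
dists, ids = index.search(query, 2)
```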
## 2.2 Data
Borgeaud et al. (2022) use the *MassiveText* dataset
(Rae et al., 2021) for both training and retrieval. As this dataset is not publicly available, we set out to replicate it using open sources. *MassiveText* consists of multilingual text data in five categories:
Wikipedia articles, books, GitHub code, news, and common crawl web data. We use *Pile* (Gao et al.,
2021) and *RealNews* (Zellers et al., 2019) to build a large dataset resembling *MassiveText*'s composition. The new dataset (see Norlund et al. (2023)
for details) consists of 36M documents containing 52B tokens. For *Pile*, we keep the training and validation splits, while for *RealNews*, we use the full training set but downsample the validation set to 16,400 news articles to match the proportions of the categories in *Pile*. For details on the deduplication process, we refer to Gao et al. (2021) and Zellers et al. (2019).
## 2.3 Training
We use our dataset to train a RETRO model with approximately 630M parameters. For more details refer to Norlund et al. (2023). During training, we retrieve from the training set; during validation, we retrieve from the union of the training and validation sets. We train the model on sequences truncated to 1,024 tokens. The chunk size is 64, as in Borgeaud et al. (2022), and the number of retrieved neighbors is k = 2 for training and validation. We train the model for 140k training steps with a batch size of 16, taking seven days on 16 A100 GPUs.
This means that we use 6% of the training data during training, not including the retrieved neighbors.
As our optimizer, we use Adam (Kingma and Ba, 2015) with a fixed learning rate of 1e−4.
## 3 A Study On Correlations
We experiment with two settings: RETRO[ON],
the language model with retrieval enabled, and RETRO[OFF], where there are no chunked cross-attention layers and therefore no retrieval, leaving us with a decoder-only language model. As shown by Borgeaud et al. (2022), the RETRO[ON] model performs better when it can exploit an overlap between the generated text and the retrieved neighbor.
This is more apparent in text categories with higher token overlap, such as GitHub. The studies in the RETRO paper also show that allowing more overlap when deduplicating the data results in a lower bits-per-byte (BPB). Norlund et al. (2023) take this further to show that even minimal overlap results in significant loss reduction, demonstrating the large extent to which RETRO relies on surface-level similarities.
These findings lead us to hypothesize that having a retrieval method that can find the highest overlapping neighbors will yield lower perplexity (PPL).
Because BERT, ST and similar deep representations of sentences do not always capture surface-level similarities, we set out to investigate where the performance gains come from.
To this end, we measure how the PPL difference
(∆PPL) between RETRO[ON] and RETRO[OFF] for the current chunk (Cu, u ≥ 2) correlates with (1)
squared L2 distance between the ST embeddings of Cu and RET(Cu−1) (ST), and (2) unigram token overlap, based on T5 tokenization, between Cu and RET(Cu−1).

| X | Y | ρ | r |
|---------------|---------------|-------|-------|
| L2² (ST) | ∆PPL | 0.328 | 0.134 |
| token overlap | ∆PPL | 0.494 | 0.415 |
| L2² (ST) | token overlap | 0.464 | 0.515 |

Table 1: Spearman (ρ) and Pearson (r) correlation coefficients between ∆PPL, the squared L2 distance of ST embeddings (L2²), and unigram token overlap.

The results, reported in Table 1, show a considerably stronger correlation between
∆PPL and unigram token overlap (measure 2) than between ∆PPL and L2 distance (measure 1). The trend is similar between Spearman and Pearson correlation coefficients.
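A sketch of how the two measures and their correlations with ∆PPL can be computed is given below; the exact normalization of the token overlap is an assumption of this sketch, not a specification of our evaluation code:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def token_overlap(query_ids, retrieved_ids):
    """Unigram token overlap between two T5 token-ID sequences.
    Normalizing by the query vocabulary size is an assumption of this sketch."""
    q, r = set(query_ids), set(retrieved_ids)
    return len(q & r) / max(len(q), 1)

def squared_l2(u, v):
    return float(np.sum((np.asarray(u) - np.asarray(v)) ** 2))

def correlations(delta_ppl, measure):
    """Spearman's rho and Pearson's r between per-chunk delta-PPL and a measure."""
    rho, _ = spearmanr(delta_ppl, measure)
    r, _ = pearsonr(delta_ppl, measure)
    return rho, r
```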
## 4 Changing The Retrieval Method
As the results from the previous section show a stronger correlation between performance gain and surface-level similarity than ST similarity, we experiment with a retrieval method based on BM25.
## 4.1 BM25
Okapi BM25, introduced by Robertson et al. (1995),
is a bag-of-words retrieval method based on tf–idf scores and some free parameters. These parameters are k1, which normalizes the term frequency, and b, which controls how much the length of a document affects the term frequency values.
We use Pyserini (Lin et al., 2021), a Python interface to Lucene's BM25 implementation. We build the BM25 index on the training set and leave the free parameters at their default values (k1 = 0.9, b = 0.4). These values were also shown to perform the best by Karpukhin et al. (2020a). Using Lucene's Analyzer pipeline³ results in more than 50M unique words for our corpus. We instead use the T5 tokenizer from Hugging Face Transformers
(Wolf et al., 2020) and limit our vocabulary to 32k words for the reranking experiments.
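For concreteness, the scoring function with these defaults can be written out directly; in our experiments we rely on Pyserini/Lucene rather than the minimal re-implementation sketched here:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=0.9, b=0.4):
    """Score every tokenized document against the query; higher is better."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter(t for d in docs_tokens for t in set(d))  # document frequencies
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        score = 0.0
        for t in set(query_tokens):
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores
```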
## 4.2 Retrieving With BM25
We use the model described in Section 2.3 and change the retrieval method only at inference time to retrieve better neighbors. The results can be found in Table 2. The perplexity is 14.00 for RETRO[OFF] and 10.87 for RETRO[ON] with ST retrieval (RETRO[ON]-ST), corresponding to a 22.3% reduction in PPL. Replacing the retrieval method with BM25 (RETRO[ON]-BM25) gives an additional 13.7% reduction, which is 61.3% of the initial drop.

| Model                         | PPL   | BPB   |
|-------------------------------|-------|-------|
| RETRO[OFF]                    | 14.00 | 0.984 |
| RETRO[ON]-ST                  | 10.87 | 0.889 |
| RETRO[ON]-ST + BM25 reranking | 10.46 | 0.875 |
| RETRO[ON]-BM25                | 8.95  | 0.817 |

Table 2: Perplexity (PPL) and bits-per-byte (BPB) for the different retrieval settings.

³Lucene Analyzers (Lucene) are used to extract index terms from text, which includes tokenization and preprocessing.
For comparability with Borgeaud et al. (2022), we also report BPB. The results show that using neighbors with more surface-level similarity to the generated chunk is a solid method for leveraging the retrieval mechanism to reduce the perplexity. If the retrieval augmentation is meant to act as an external memory, or to offload memorization from the model (Borgeaud et al., 2022), then BM25 is a more suitable method to achieve this goal.
## 4.3 Reranking
While the performance gain is significant, finding the *exact* neighbors using BM25 could be costly, depending on the size of the datasets. On the other hand, faiss provides an efficient similarity search for dense vectors to find the *approximate* neighbors.
Therefore, if enough of the BM25-retrieved neighbors could be found among the top-k faiss-retrieved ones, with an efficient reranking we could expect at least part of the performance gain with minimal computational overhead, as long as k is not significantly large. To find an optimal k, we first need to know how many of the BM25 neighbors can be found in the top-k faiss-retrieved chunks.
Looking at the faiss-retrieved neighbors, we see that of the top-4 BM25-retrieved neighbors, 17.6% appear in the top-100 faiss-retrieved chunks, while the overlap is 22.1% for the top-1000. We decide to continue our experiment with the top-1000 neighbors, but it is obvious that one could get an even higher overlap with a higher k, with diminishing returns.
The results in Table 2 show that with the proposed reranking, RETRO[ON]-ST could achieve 21.3% of the PPL reduction of RETRO[ON]-BM25 compared to RETRO[ON]-ST. The reranking results are interesting not only due to their practical implications but also as an analysis revealing the limited number of high-quality neighbors that can be retrieved using semantic retrieval, even in situations where a large k is feasible.
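A sketch of this hybrid procedure, reusing the faiss index and the BM25 scorer from the earlier sketches (all names are illustrative):

```python
def hybrid_retrieve(query_text, query_emb, index, chunk_tokens, tokenizer,
                    k_approx=1000, k_final=2):
    """Retrieve k_approx approximate neighbors with faiss, re-rank them with BM25.

    query_emb is a (1, d) float32 array; chunk_tokens holds the tokenized chunks.
    """
    _, approx_ids = index.search(query_emb, k_approx)      # ST/faiss candidates
    candidate_ids = approx_ids[0].tolist()
    query_tokens = tokenizer.tokenize(query_text)
    candidate_docs = [chunk_tokens[i] for i in candidate_ids]
    scores = bm25_scores(query_tokens, candidate_docs)      # see the Section 4.1 sketch
    reranked = sorted(zip(candidate_ids, scores), key=lambda x: -x[1])
    return [i for i, _ in reranked[:k_final]]
```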
## 5 Related Work
Augmenting language models with mechanisms that help them incorporate larger contexts has been approached extensively in different forms, such as Guu et al. (2018)'s retrieve-and-edit approach to reduce the PPL in language generation, and Asai et al. (2020), who make use of lexical overlap to improve performance in question answering.
While retrieval-augmentation has been used with different objectives in mind, such as language modeling (Khandelwal et al., 2020; Wu et al., 2022)
and machine translation (Khandelwal et al., 2021),
question answering has been the application to attract the most interest (Guu et al., 2020; Karpukhin et al., 2020b; Izacard and Grave, 2021b).
An extensive study was performed by Izacard et al. (2022), showing that while we get performance gains using retrieval augmentation, training the retrieval part of the model would yield even more benefits. RETRO (Borgeaud et al., 2022),
on the other hand, aims at scaling such language models and therefore opts for keeping the retriever frozen, showing substantial PPL reduction when increasing either the number of language model parameters or the size of the retrieval set.
Among the more recent work, Xu et al. (2023)
found that training using approximate neighbors resulted in a 2.6% decrease in perplexity. This suggests that non-exact neighbors may have a regularization effect, leading to improved generalization ability. Additionally, Ram et al. (2023) report a drop in perplexity using BM25 over BERT
retrieval using in-context retrieval-augmented language models.
## 6 Conclusions And Future Work
In this paper, we study the source of performance gains in RETRO, and our findings could generalize to similar retrieval-augmented language models. After observing that the PPL drop correlates more strongly with surface-level overlap between the query and the retrieved text, we replace the retrieval method with BM25 and observe a significant drop in PPL, which confirms the findings of the correlation study. This is also an interesting insight into how these models work, which could be leveraged for performance gains in tasks like question answering, where the model relies on retrieving facts.
In the end, we also conduct an analysis to find out how much the BM25 neighbors overlap with those retrieved using ST. The results show that while faiss is able to find some of the neighbors with high token overlap, the majority of them remain unretrieved. This is, however, enough to recover part of the loss reduction achieved with a pure BM25 retrieval system.
The proposed methods could also be used during training. By retrieving more overlapping neighbors during training, the process of guiding the model to use retrieved neighbors for language modeling could be done more efficiently. This is particularly relevant when augmenting an already trained language model with a retrieval mechanism. As reported by Borgeaud et al. (2022), retrieval augmentation results in a larger drop in BPB as the number of model parameters and the size of the retrieval data grow. This calls for more efficient methods based on surface-level similarities if we wish to exploit this potential. Furthermore, although the retrieval system in RETRO is based on semantic retrieval, the model seems to rely more on surface-level similarities. This could affect the generalization capabilities of such models, which necessitates further investigation. Lastly, we only evaluate our modified RETRO model on language modeling. It would be interesting to know the impacts of BM25 retrieval on downstream tasks where retrieval is of use.
## Limitations
We only experiment with one type of retrieval-augmented language model, i.e., RETRO. However, the ways other models retrieve and integrate neighbors do not differ enough to affect the results in this paper. The experiments in this paper are done with a small RETRO model and dataset compared to the sizes considered by Borgeaud et al. (2022), due to computational limitations. According to the same authors, however, the gains should be constant with the increase of the model and retrieval set size. The larger models are mainly different in their behavior when there is no overlap. However, this should not affect the copying tendency of these models tremendously, as it is still the easiest way to generate the next token. It is also worth noting that RETRO[OFF], while not using retrieval at test time, is still *trained* using retrieval, so it is not a completely retrieval-free model. The results presented by Borgeaud et al. (2022), however, show that RETRO[OFF] is on a par with their retrieval-free baseline in terms of BPB. Finally, we note that our evaluations have only considered the perplexity under teacher forcing, and we have not investigated the behavior of the model in free-form generation or with any kind of fine-tuning.
## Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Alvis partially funded by the Swedish Research Council through grant agreement no. 2022-06725, and by the Berzelius resources provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Center.
## References
Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In *International Conference on* Learning Representations.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240.
PMLR.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The pile: An 800gb dataset of diverse text for language modeling.
CoRR, abs/2101.00027.
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Realm: Retrievalaugmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938. PMLR.
HuggingFace. HuggingFace T5.

Gautier Izacard and Edouard Grave. 2021a. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics.
Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019.
Billion-scale similarity search with GPUs. *IEEE*
Transactions on Big Data, 7(3):535–547.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020a. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in
Natural Language Processing (EMNLP), pages 6769–
6781, Online. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020b. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–
6781, Online. Association for Computational Linguistics.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *International Conference on Learning* Representations (ICLR).
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *ICLR (Poster)*.
Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022. A survey on retrieval-augmented text generation. arXiv preprint 2202.01110.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, JhengHong Yang, Ronak Pradeep, and Rodrigo Nogueira.
2021. Pyserini: A python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, page 2356–2362, New York, NY, USA. Association for Computing Machinery.
Apache Lucene. Lucene Analyzer.
Tobias Norlund, Ehsan Doostmohammadi, Richard Johansson, and Marco Kuhlmann. 2023. On the generalization ability of retrieval-enhanced transformers.
In *Findings of the Association for Computational Linguistics: EACL 2023*, pages 1485–1493, Dubrovnik, Croatia. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew J. Johnson, Blake A.
Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. *CoRR*, abs/2112.11446.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. *arXiv preprint arXiv:2302.00083*.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
Association for Computational Linguistics.
Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3.
In Overview of the Third Text REtrieval Conference
(TREC-3), pages 109–126. Gaithersburg, MD: NIST.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In *International Conference on Learning Representations*.
Frank F Xu, Uri Alon, and Graham Neubig. 2023. Why do nearest neighbor language models work? arXiv preprint arXiv:2301.02828.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In *Advances in Neural Information Processing* Systems, volume 32. Curran Associates, Inc.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
It's the last section and it is unnumbered.
✓ A2. Did you discuss any potential risks of your work?
We mention that relying on surface-level similarities could affect the generalizability capabilities of such models, which necessitates further investigations.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
They could be found in the abstract, section 1 (introduction), and even the other sections.
✓ A4. Have you used AI writing assistants when working on this paper?
We used Grammarly to a limited extent.
## B ✓ **Did you use or create scientific artifacts?**
It's all over the paper, but mainly Section 2.
✓ B1. Did you cite the creators of artifacts you used?
It's all over the paper, but mainly section 2.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
One should consult to the main papers for that.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. It is not taken care of by us, but by the authors of those datasets.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did you run computational experiments?**
Sections 2, 3, and 4.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 2.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Some of them are not applicable, but the rest are discussed in Section 2.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Yes. We will even publish our code later for absolute transparency.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
razdaibiedina-brechalov-2023-miread | {MIR}e{AD}: Simple Method for Learning High-quality Representations from Scientific Documents | https://aclanthology.org/2023.acl-short.46 | Learning semantically meaningful representations from scientific documents can facilitate academic literature search and improve performance of recommendation systems. Pretrained language models have been shown to learn rich textual representations, yet they cannot provide powerful document-level representations for scientific articles. We propose MIReAD, a simple method that learns highquality representations of scientific papers by fine-tuning transformer model to predict the target journal class based on the abstract. We train MIReAD on more than 500,000 PubMed and arXiv abstracts across over 2,000 journal classes. We show that MIReAD produces representations that can be used for similar papers retrieval, topic categorization and literature search. Our proposed approach outperforms six existing models for representation learning on scientific documents across four evaluation standards. | # Miread: Simple Method For Learning High-Quality Representations From Scientific Documents
Anastasia Razdaibiedina♢,♠ and **Alexander Brechalov**♢
♢University of Toronto and ♠Vector Institute [email protected] [email protected]
## Abstract
Learning semantically meaningful representations from scientific documents can facilitate academic literature search and improve performance of recommendation systems. Pretrained language models have been shown to learn rich textual representations, yet they cannot provide powerful document-level representations for scientific articles. We propose MIREAD, a simple method that learns highquality representations of scientific papers by fine-tuning transformer model to predict the target journal class based on the abstract. We train MIREAD on more than 500,000 PubMed and arXiv abstracts across over 2,000 journal classes. We show that MIREAD produces representations that can be used for similar papers retrieval, topic categorization and literature search. Our proposed approach outperforms six existing models for representation learning on scientific documents across four evaluation standards. 1
## 1 Introduction
A significant increase in the volume of scientific publications over the past decades has made the academic literature search a more challenging task.
One of the key steps to improve the recommendation systems (RS) for research articles is to obtain high-quality document-level representations.
Recently, transformer-based models have brought substantial progress to the field of natural language processing (NLP), obtaining state-of-the-art results on a variety of benchmarks (Vaswani et al., 2017; Devlin et al., 2018). While transformer models are effective in language modeling and learning sentence representations, deriving document-level representations for scientific articles remains a challenge.

¹MIReAD model weights are available through HuggingFace at https://huggingface.co/arazd/MIReAD. Abstracts and journal data are available through HuggingFace Hub at https://huggingface.co/datasets/brainchalov/pubmed_arxiv_abstracts_data
Previous transformer-based methods for representation learning on scientific documents are derived from the BERT model (Devlin et al., 2018). Classic examples of such approaches are PubMedBERT, BioBERT and SciBERT - scientific domain adaptations of BERT, which were pre-trained with the masked language modeling (MLM) objective on PubMed abstracts, as well as full-text articles from PubMedCentral and Semantic Scholar, respectively (Gu et al., 2020; Lee et al., 2020; Beltagy et al., 2019). While the MLM objective allows the model to efficiently capture the context of the sentence, it cannot achieve accurate paper representations that can be used "off-the-shelf" to discover similar articles. To address this problem, recent works explored fine-tuning the pretrained models with supervised objectives based on citation graphs (Wright and Augenstein, 2021; Cohan et al., 2020). Despite their efficiency, citation-based objectives have several disadvantages: (1) citations are not distributed uniformly, with novel papers and articles from certain fields being less favoured; (2) citations have a bias related to increased self-citation and the existence of over-cited papers; (3) citation graphs are often large and difficult to preprocess. Hence, there is a gap in representation learning for scientific articles, requiring approaches which would derive high-quality document-level representations without relying on citation graphs.
In this paper, we propose MIREAD, an approach that requires Minimal Information for Representation Learning of Academic Documents.
MIREAD combines the SciBERT architecture with a novel training objective - target journal classification. We show that such a simple training objective leads to high-quality representations of academic papers, suitable for RS usage. Figure 1 illustrates how MIREAD representations from unseen abstracts are separated based on scientific domain, even though this information was not accessed during training. We trained MIREAD by predicting one of 2,734 journal classes from the paper's title and abstract for 500,335 articles from PubMed and arXiv. Then we measured the quality of paper representations obtained with MIREAD
using three evaluation standards - linear evaluation, information retrieval, and clustering purity scores
- on three different datasets. MIREAD substantially outperforms 5 previous approaches (BERT,
PubMedBERT, BioBERT, SciBERT, CiteBERT)
across all evaluation benchmarks and outperforms SPECTER in most cases.
## 2 Methods

## 2.1 MIReAD
MIREAD is based on the BERT architecture, and we initialize it from SciBERT's weights. We fine-tune MIREAD to predict the journal class solely from the paper's abstract and title with a cross-entropy loss:
$$L(\widehat{y},y)=-\sum_{i=1}^{N}y_{i}\log(\widehat{y}_{i})$$

Here $\widehat{y}_{i}$ and $y_{i}$ stand for the predicted probability and the ground-truth label of class $i$, and $N$ is equal to 2,734, the total number of unique journal classes.
MIREAD takes as input a concatenation of the paper's title and abstract, appended to the [CLS] token and separated by the [SEP] token:

$$\mathrm{input}=[\texttt{CLS}]\ \mathrm{title}\ [\texttt{SEP}]\ \mathrm{abstract}$$

The final paper representation $v$ is obtained by passing the input through the transformer model and taking the representation of the [CLS] token:

$$v=\mathrm{forward}(\mathrm{input})_{[\texttt{CLS}]}$$
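A minimal sketch of this setup with the HuggingFace transformers library is shown below; the exact classification head and training hyperparameters are simplified, and the journal-to-index mapping is assumed to be precomputed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_JOURNALS = 2734
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased", num_labels=NUM_JOURNALS)

title, abstract = "Example title", "Example abstract text ..."
# Encoded as [CLS] title [SEP] abstract; the classification head operates on
# the pooled [CLS] representation.
inputs = tokenizer(title, abstract, truncation=True, max_length=512,
                   return_tensors="pt")
labels = torch.tensor([42])               # index of the target journal class
outputs = model(**inputs, labels=labels)  # cross-entropy loss over 2,734 classes
outputs.loss.backward()
```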
## 2.2 Dataset
To achieve good coverage of different knowledge domains, we constructed a dataset from arXiv and PubMed abstracts and their corresponding metadata (title and journal) (Clement et al., 2019). We limited the number of abstracts per journal to 300, and excluded journals with fewer than 100 abstracts or no publications in 2021. The final dataset contains 500,335 abstracts
(181,967 from arXiv and 318,368 from PubMed),
covers 76 scientific fields and 2,734 journals. More details on dataset preparation are in Appendix A.1.
We fine-tune MIREAD for one epoch on all paper abstracts using a learning rate of 1e-6.
## 2.3 Baseline Models
We compare MIREAD to six baseline approaches based on BERT (Devlin et al., 2018). We use the original BERT model, its three different domain adaptations: BioBERT (Lee et al., 2020), PubMedBERT (Gu et al., 2020) and SciBERT (Beltagy et al., 2019), as well as two representation extraction models trained with citation objectives:
CiteBERT (Wright and Augenstein, 2021) and SPECTER (Cohan et al., 2020). Additionally, we include SentenceBERT (Reimers and Gurevych, 2019) - a modification of the BERT model that includes siamese network structure to find semantically similar sentence pairs.
## 3 Evaluation Of Representations
We evaluate the information content of the representations of scientific abstracts produced by different approaches. Ideally, we are interested in representations that contain information about the scientific domain and allow us to distinguish specific subdomains within larger fields. We use three common strategies for representation quality assessment: linear evaluation, clustering purity and information retrieval.
## 3.1 Linear Evaluation Of Representations
We first evaluate representations with the commonly used *linear evaluation protocol* (Zhang et al., 2016; Oord et al., 2018; Chen et al., 2020). Under this protocol, a linear classifier is trained on top of the extracted representations, and test accuracy is used as a quality metric. Hence, better information content of the representations translates into higher classification accuracy. Details of training the logistic regression are provided in Appendix A.3. In our experiments, we perform linear evaluation of representations derived from abstracts from four datasets of varying difficulty:

| Task → | MAG | | MeSH | | arXiv & PubMed | | Unseen journals | |
|------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| Model ↓ | F1 | Acc. | F1 | Acc. | F1 | Acc. | F1 | Acc. |
| BERT | 71.47±0.82 | 77.82±0.49 | 46.33±0.41 | 63.67±0.33 | 4.22±0.21 | 22.05±0.92 | 2.62±0.19 | 4.70±0.25 |
| PubMedBERT | 72.65±0.97 | 78.25±0.4 | 72.45±0.8 | 77.80±0.51 | 4.07±0.27 | 19.00±0.73 | 0.71±0.31 | 1.41±0.5 |
| BioBERT | 59.43±0.22 | 71.63±0.38 | 50.60±1.12 | 67.87±0.58 | 2.53±0.2 | 20.00±0.84 | 0.65±0.22 | 1.95±0.44 |
| SciBERT | 74.84±0.57 | 79.47±0.35 | 66.67±0.98 | 74.19±0.58 | 10.75±0.71 | 31.30±0.45 | 7.90±1.38 | 11.46±1.64 |
| CiteBERT | 70.49±0.58 | 76.40±0.21 | 55.94±1.23 | 67.80±0.30 | 9.26±0.49 | 29.05±1.07 | 6.72±0.73 | 10.19±0.79 |
| SentBERT∗ | 80.5 | − | 69.1 | − | − | − | − | − |
| SPECTER | 81.47±0.18 | 85.05±0.14 | 86.23±0.27 | 87.38±0.13 | 30.75±0.69 | 44.92±0.49 | 18.26±1.34 | 23.73±1.17 |
| MIREAD | 81.85±0.59 | 84.85±0.31 | 86.71±0.36 | 88.22±0.19 | 34.97±0.3 | 48.95±0.26 | 19.35±0.49 | 25.11±0.36 |

Table 1: Linear evaluation of paper representations: F1 score and accuracy (mean ± standard deviation across cross-validation runs) on four classification tasks.
Academic topics In this task, we predict the research field of the paper using the Microsoft Academic Graph (MAG) dataset (Sinha et al., 2015). MAG provides paper labels, which are organized into a hierarchy of 5 levels. We follow the SciDocs evaluation framework by Cohan et al. (2020), which provides a classification dataset with labels from level 1 topics (e.g. business, sociology, medicine etc.) and has a train-test split. Overall, the MAG dataset consists of 19 classes and covers 25K papers.
Medical subject headings We use the Medical Subject Headings (MeSH) dataset by Lipscomb (2000) to classify academic paper representations into one of 11 disease classes (e.g. diabetes, cardiovascular disease etc.). Similarly to MAG, we use data with train and test splits provided by SciDocs. This dataset contains a total of 23K medical papers.
PubMed and arXiv categories We constructed a dataset of academic papers and their corresponding PubMed and arXiv categories. For fair comparison, we collected papers solely from journals that were not seen by MIREAD during training. For PubMed data, we used scientific topic identifiers that come with the journal metadata. For arXiv data, we omitted subcategories and used major categories (e.g. CS.ML and CS.CG were labeled as CS). To ensure that each paper is mapped to a single label, we used arXiv papers with all annotations coming from the same major category. This dataset contains 12K papers across 54 scientific field categories (e.g. physics, computer science, bioinformatics, molecular biology etc.).
Unseen journal classification This task evaluates whether the learned representations contain very detailed information that allows to distinguish which journal the paper comes from. Since this task resembles MIREAD training objective, we only used journal classes that were not seen during training. This dataset contains the same 12K papers from PubMed and arXiv as the previous task, and 200 journal labels.
We report test set performance of the linear classifier selected by maximal validation set accuracy, and use 4-fold cross validation.
## 3.2 Clustering Purity
In our subsequent experiments, we evaluate how the representations perform when used "off-the-shelf", without any fine-tuning. Such a scenario is important for measuring the quality of the representations, since it more closely resembles paper search with an RS. Following the assessment strategy for pre-trained representations from Aharoni and Goldberg (2020), we first evaluate clustering using the *purity* metric, a widely adopted measure of clustering quality based on intra-cluster similarity (Manning et al., 2010). Higher clustering purity indicates the model's ability to provide representations that can be more easily grouped into meaningful clusters, such as academic topics. We show results on the MAG and MeSH datasets, and perform clustering with the k-means algorithm with an increasing number of clusters (10, 20, 50, 100). We compute the purity score between ground-truth annotations and k-means cluster labels.
| Method ↓ | 10 | 20 | 50 | 100 |
|------------|-------|-------|-------|-------|
| BERT | 29.51 | 31.51 | 34.50 | 37.08 |
| PubMedBERT | 32.45 | 32.70 | 37.30 | 40.30 |
| BioBERT | 33.45 | 35.45 | 41.36 | 45.30 |
| SciBERT | 29.02 | 31.45 | 35.22 | 38.13 |
| CiteBERT | 29.22 | 30.53 | 33.90 | 36.73 |
| SPECTER | 57.28 | 65.07 | 70.87 | 74.21 |
| MIREAD | 57.38 | 64.78 | 72.15 | 76.26 |

Table 2: Clustering purity on the MeSH dataset with k-means clustering of frozen representations. Results with 10, 20, 50 and 100 clusters are reported.
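A sketch of this purity computation, assuming frozen representations as a NumPy array and ground-truth topic labels:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.cluster import contingency_matrix

def clustering_purity(features, true_labels, n_clusters):
    cluster_ids = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(features)
    cm = contingency_matrix(true_labels, cluster_ids)
    # Each cluster is assigned its most frequent ground-truth class.
    return cm.max(axis=0).sum() / cm.sum()

# e.g. purities = {k: clustering_purity(reps, labels, k) for k in (10, 20, 50, 100)}
```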
## 3.3 Information Retrieval
In this final part of our evaluation framework, we measure the quality of representations according to the *information retrieval* perspective. Information retrieval is the process of searching and returning relevant items (in our case scientific documents)
based on the input query (Manning et al., 2010).
For an RS, relevant research papers are found based on the similarity score between frozen representations.
Hence, evaluating how relevant the recommended documents are based on the query document can indicate the quality of the pretrained representations.
For this experiment, we use arXiv subcategories as more stringent labels to measure relevance of representation retrieval (Clement et al., 2019). We collect arXiv papers with their subcategories metadata from six different fields: Computer Science
(CS), Mathematics (Math), Physics (Phys), Electrical Engineering and Systems Science (EESS), Economics (Econ) and Statistics (Stat). We perform independent evaluation of subcategories within each field.
We use a commonly adopted evaluation scheme, in which pairs of representations are ranked from highest to lowest based on their Pearson correlation score. Each pair receives a ground-truth label of 0 if no subcategories overlap, or 1 otherwise. We report average precision (AP) and area under curve
(AUC) scores as final information retrieval metrics.
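A sketch of this pairwise evaluation, assuming frozen representations and per-paper sets of arXiv subcategories:

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import average_precision_score, roc_auc_score

def pairwise_retrieval_metrics(features, subcategories):
    """features: (n, d) frozen representations; subcategories: list of sets."""
    scores, labels = [], []
    for i, j in combinations(range(len(features)), 2):
        sim = np.corrcoef(features[i], features[j])[0, 1]  # Pearson correlation
        scores.append(sim)
        labels.append(int(len(subcategories[i] & subcategories[j]) > 0))
    return (average_precision_score(labels, scores),
            roc_auc_score(labels, scores))
```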
## 4 Results
We compared MIREAD with the original BERT
model and 5 other approaches that use BERT architecture: PubMedBERT, BioBERT, SciBERT, CiteBERT and SPECTER.
Table 1 shows results of the linear evaluation of representations obtained from seven different models on four tasks/datasets (See Methods). Overall, MIREAD **shows a substantial increase in accuracy and F1 score on all four tasks**. On MAG
and MeSH tasks, MIREAD achieved 84.85% and 88.22% accuracy, respectively (81.85 and 86.71 in F1 score). Similarly, MIREAD showed substantial improvements compared to other BERT-based models on the 54-class PubMed/arXiv categories classification and the 200-class unseen journals classification tasks.
MIREAD performance is the closest to SPECTER,
although MIREAD outperforms SPECTER in F1 scores across all 4 presented datasets, with statistically significant improvement in 3 cases out of 4.
To measure the significance of the improvement, we performed an unpaired t-test between the scores of both approaches. The p-values of the t-test between F1 scores across 5 runs of SPECTER and MIREAD are 0.2, 0.04, 0.0001 and 0.05 for the MAG, MeSH, arXiv & PubMed, and unseen journals datasets, demonstrating significant differences for MeSH, arXiv & PubMed, and unseen journals.
We evaluated the quality of representations with the purity metric of k-means clusters. To compute clustering purity, each cluster is assigned to its "true" class (the most frequent class in the cluster), then accuracy is measured by counting the number of correctly assigned documents and dividing by the number of samples. Clustering purity on the MeSH (shown in Table 2) and MAG (shown in Appendix A.4, Table 4) datasets shows that MIREAD performs better than (on MeSH) or on par with (on MAG) SPECTER. Both MIREAD and SPECTER significantly outperform all other tested models.
Similar results were obtained in the information retrieval experiments with arXiv subcategories (Average Precision is shown in Table 3). Although SPECTER showed better precision for the Math and Physics categories, MIREAD outperformed it in the Economics, Computer Science (CS) and Electrical Engineering and Systems Science (EESS) categories of the arXiv dataset, improving Average Precision by +12.1%, +11.6% and +4.7%, respectively.
Overall, three types of evaluations on various datasets reveal that MIREAD **produces powerful representations of academic papers** whose information content outperforms or matches the performance of the current state-of-the-art feature extraction models.
| Method | CS | Math | Phys | EESS | Econ | Stat |
|----------|-------|--------|--------|--------|--------|--------|
| BERT | 20.86 | 13.28 | 21.70 | 65.41 | 61.49 | 61.10 |
| PMBERT | 21.00 | 12.54 | 22.81 | 65.79 | 72.05 | 63.36 |
| BioBERT | 22.98 | 13.07 | 23.26 | 66.28 | 67.40 | 64.70 |
| SciBERT | 23.26 | 14.97 | 21.84 | 67.48 | 64.71 | 62.91 |
| CiteBERT | 18.75 | 12.59 | 17.50 | 65.70 | 55.74 | 60.47 |
| SPECTER | 31.97 | 27.78 | 37.17 | 72.53 | 69.66 | 63.91 |
| MIREAD | 35.69 | 19.15 | 34.69 | 75.91 | 78.12 | 63.99 |

Table 3: Information retrieval on arXiv subcategories: average precision (AP) for each of the six fields. PMBERT denotes PubMedBERT.
## 5 Conclusions
We present MIREAD, a transformer-based method for representation learning of research articles using minimal information. We fine-tuned MIREAD
by predicting the target journal using the paper's title and abstract, and assembled a training dataset spanning over half a million data points from over two thousand journals.
Earlier, we saw this effect on biological data, where predicting subcellular localization (dozens of classes) (Razdaibiedina and Brechalov, 2022) or protein (thousands of classes) (Razdaibiedina et al., 2023) from fluorescence microscopy images allows one to obtain high-quality features. These resulting features had higher information content and could be applied to various downstream analysis tasks. Similarly to our findings, more classification labels improved feature quality, which was reflected in downstream task performance. We found that the journal title is a high-quality label for scientific manuscripts, for several reasons. Firstly, scientific journals are often highly specialized and focused on a single topic. Therefore, the journal name can serve as a precise topic label. Additionally, journals with different Impact Factors may accept slightly different types of research works, making the journal name a valuable human-annotated label. In our study, the number of journals was determined by the available datasets. In a preliminary experiment, we found that increasing the number of labels resulted in better specificity of the representations (data not shown). For example, an increase from 100 to 1000 labels helps the model to learn better separations between sub-fields (e.g. medical sub-domains). We found that lower-level labels encourage the model to learn more fine-grained features to distinguish between journal classes, while high-level labels encourage the model to focus on a few important features, which may lead to oversimplification of the representation content.
Our experimental results show that MIREAD
substantially outperforms 6 previous approaches
(BERT, PubMedBERT, BioBERT, SciBERT, CiteBERT, SentenceBERT) across three evaluation benchmarks, and outperforms SPECTER, the current SOTA approach for representation learning on scientific articles, in most cases. The major advantage of MIREAD compared to SPECTER
is that MIREAD uses solely the paper's abstract and metadata and does not require additional information such as the citation graph. Hence, MIREAD can be trained on novel papers that have not yet been cited, or on papers that are not open access.
## 6 Limitations
The underlying assumption of our method is that the abstract reflects the entire article, creating an unbiased summary of the paper. However, the abstract does not guarantee an objective representation of the paper and can often emphasize the main findings while discarding details that the authors deem insignificant. This can lead to potential inaccuracies in paper representations, affecting the results of paper retrieval and recommendation.
Also, in this work we did not exhaust all possible training settings and evaluation strategies due to limited resources. We perform evaluation using three different standards. While we selected the most relevant evaluation tasks, it would be interesting to assess the quality of representations in other ways, such as citation graph reconstruction, predicting reader activity and other clustering-based evaluations. Additionally, with the emergence of large-scale language models, another interesting direction for future research is to investigate the relationship between model size and final performance.
## Acknowledgement
We would like to thank Vector Institute for providing computational resources to run the experiments.
Anastasia Razdaibiedina is supported by Vector Institute Postgraduate Affiliate Fellowship.
## References
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. *arXiv* preprint arXiv:2004.02105.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert:
A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR.
Colin B. Clement, Matthew Bierbaum, Kevin P.
O'Keeffe, and Alexander A. Alemi. 2019. On the use of arxiv as a dataset.
Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, and Daniel S Weld. 2020. Specter: Document-level representation learning using citation-informed transformers. arXiv preprint arXiv:2004.07180.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining.
Bioinformatics, 36(4):1234–1240.
Carolyn E Lipscomb. 2000. Medical subject headings
(mesh). *Bulletin of the Medical Library Association*,
88(3):265.
Christopher Manning, Prabhakar Raghavan, and Hinrich Schütze. 2010. Introduction to information retrieval.
Natural Language Engineering, 16(1):100–103.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Anastasia Razdaibiedina and Alexander Brechalov.
2022. Learning multi-scale functional representations of proteins from single-cell microscopy data.
arXiv preprint arXiv:2205.11676.
Anastasia Razdaibiedina, Alexander V Brechalov, Helena Friesen, Mojca Mattiazzi Usaj, Myra Paz David Masinas, Harsha Garadi Suresh, Kyle Wang, Charlie Boone, Jimmy Ba, and Brenda J Andrews. 2023. Pifia: Self-supervised approach for protein functional annotation from single-cell imaging data. *bioRxiv*,
pages 2023–02.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics.
Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. 2015. An overview of microsoft academic service (mas) and applications. In *Proceedings of the 24th international* conference on world wide web, pages 243–246.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *arXiv preprint* arXiv:1910.03771.
Dustin Wright and Isabelle Augenstein. 2021. Citeworth: Cite-worthiness detection for improved scientific document understanding. *arXiv preprint* arXiv:2105.10912.
Richard Zhang, Phillip Isola, and Alexei A Efros. 2016.
Colorful image colorization. In *European conference* on computer vision, pages 649–666. Springer.
## A Appendix A.1 Dataset Preparation
PubMed. Since available PubMed datasets did not contain all the necessary metadata, we created a custom dataset by parsing PubMed articles. We queried the PubMed e-utils interface with a custom Python script. The query contained the journal's ISSN and a year of publication. We ran through the list of journals from the https://www.scimagojr.com website and performed searches for the years 2016 to 2021. The list of retrieved PMIDs was then split into batches of no more than 200 items each and used to download the articles in XML format. The XML was then parsed for the PMID, title, abstract, journal name and publication date. We only saved articles with English abstracts to a file. Next, the final list of journals was filtered such that the remaining journals had at least 300 publications in the period 2016-2021 and at least 1 publication in 2021. For the final dataset, we limited the number of articles per journal to 300.
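A simplified sketch of this retrieval loop with NCBI E-utilities is given below; batching details, rate limiting, error handling and the downstream XML parsing are reduced to the essentials, and the exact query format is an assumption of the sketch:

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_pubmed_xml(issn, year, batch_size=200):
    # 1) Search PMIDs for a given journal ISSN and publication year.
    search = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed",
        "term": f"{issn}[ISSN] AND {year}[PDAT]",
        "retmax": 10000,
        "retmode": "json",
    }).json()
    pmids = search["esearchresult"]["idlist"]

    # 2) Download article records in batches of at most 200 PMIDs.
    for start in range(0, len(pmids), batch_size):
        batch = pmids[start:start + batch_size]
        yield requests.get(f"{EUTILS}/efetch.fcgi", params={
            "db": "pubmed",
            "id": ",".join(batch),
            "retmode": "xml",
        }).text  # parsed downstream for PMID, title, abstract, journal, date
```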
arXiv. We used a dataset of arXiv articles https://huggingface.co/datasets/arxiv_dataset available at HuggingFace (Wolf et al., 2019). We limited the number of abstracts per journal to 300, and excluded journals with fewer than 100 abstracts or no publications in 2021. Overall, the arXiv dataset contained >171K
abstracts after preprocessing.
## A.2 Computing Resources
We used resources provided by the Vector Institute cluster with 528 GPUs, 6 GPU nodes of 8 x Titan X, and 60 GPU nodes each with 8 x T4, for the development and deployment of large-scale transformer-based NLP models.
## A.3 Linear Probing Experiments
For our linear probing experiments, we used multinomial logistic regression with a learning rate of 5e-4 and a batch size of 100, which we trained for 5 epochs. We did not add a regularization penalty, as we found that the regression model did not overfit due to its simplicity. We used 4-fold cross-validation with early stopping based on the maximal validation set performance, and our final performance is averaged across all cross-validation runs.
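A sketch of the probe (a single linear layer trained with cross-entropy; the optimizer choice below is an assumption, since only the learning rate, batch size and number of epochs are fixed above):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_linear_probe(feats, labels, n_classes, epochs=5, lr=5e-4, batch_size=100):
    probe = nn.Linear(feats.shape[1], n_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)  # optimizer is an assumption
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(feats, labels), batch_size=batch_size,
                        shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(probe(x), y).backward()
            opt.step()
    return probe
```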
## A.4 Clustering Purity On Mag Dataset
We include results for the clustering purity experiments on the MAG dataset in Table 4.
| Method | 10 | 20 | 50 | 100 |
|------------|-------|-------|-------|-------|
| BERT | 38.92 | 50.94 | 57.49 | 60.43 |
| PubMedBERT | 31.64 | 47.49 | 58.41 | 60.48 |
| BioBERT | 43.44 | 56.38 | 61.63 | 65.27 |
| SciBERT | 46.98 | 48.83 | 57.35 | 60.11 |
| CiteBERT | 33.9 | 45.05 | 51.73 | 55.55 |
| SPECTER | 61.95 | 75.03 | 78.07 | 78.67 |
| MIReAD | 61.03 | 71.9 | 75.31 | 78.63 |

Table 4: Clustering purity on the MAG dataset with k-means clustering of frozen representations. Results with 10, 20, 50 and 100 clusters across seven methods are reported.
## A.5 Arxiv Subcategories
Table 5 lists the arXiv subcategories that were grouped into each category for article topic classification.
| Category | # of subcategories | Subcategories | |
|-------------------------|---------|--------------------------------------------------------------------------------------------------------------------------|-------------|
| Computer Science (CS) | 40 | Artificial Intelligence, Hardware Architecture, Computational Complexity, Computational Engineering, Finance, and Science, Computational Geometry, Computation and Language, Cryptography and Security, Computer Vision and Pattern Recognition, Computers and Society, Databases, Distributed, Parallel, and Cluster Computing, Digital Libraries, Discrete Mathematics, Data Structures and Algorithms, Emerging Technologies, Formal Languages and Automata Theory, General Literature, Graphics, Computer Science and Game Theory, Human-Computer Interaction, Information Retrieval, Information Theory, Machine Learning, Logic in Computer Science, Multiagent Systems, Multimedia, Mathematical Software, Numerical Analysis, Neural and Evolutionary Computing, Networking and Internet Architecture, Other Computer Science, Operating Systems, Performance, Programming Languages, Robotics, Symbolic Computation, Sound, Software Engineering, Social and Information Networks, Systems and Control | |
| Mathematics (Math) | 32 | Commutative Algebra, Algebraic Geometry, Analysis of PDEs, Algebraic Topology, Classical Analysis and ODEs, Combinatorics, Category Theory, Complex Variables, Differential Geometry, Dynamical Systems, Functional Analysis, General Mathematics, General Topology, Group Theory, Geometric Topology, History and Overview, Information Theory, K-Theory and Homology, Logic, Metric Geometry, Mathematical Physics, Numerical Analysis, Number Theory, Operator Algebras, Optimization and Control, Probability, Quantum Algebra, Rings and Algebras, Representation Theory, Symplectic Geometry, Spectral Theory, Statistics Theory | |
| Physics (Phys) | 51 | Cosmology and Nongalactic Astrophysics, Earth and Planetary Astrophysics, Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Instrumentation and Methods for Astrophysics, Solar and Stellar Astrophysics, Disordered Systems and Neural Networks, Mesoscale and Nanoscale Physics, Materials Science, Other Condensed Matter, Quantum Gases, Soft Condensed Matter, Statistical Mechanics, Strongly Correlated Electrons, Superconductivity, General Relativity and Quantum Cosmology, High Energy Physics - Experiment, High Energy Physics - Lattice, High Energy Physics - Phenomenology, High Energy Physics - Theory, Mathematical Physics, Adaptation and Self-Organizing Systems, Chaotic Dynamics, Cellular Automata and Lattice Gases, Pattern Formation and Solitons, Exactly Solvable and Integrable Systems, Nuclear Experiment, Nuclear Theory, Accelerator Physics, Atmospheric and Oceanic Physics, Applied Physics, Atomic and Molecular Clusters, Atomic Physics, Biological Physics, Chemical Physics, Classical Physics, Computational Physics, Data Analysis, Statistics and Probability, Physics Education, Fluid Dynamics, General Physics, Geophysics, History and Philosophy of Physics, Instrumentation and Detectors, Medical Physics, Optics, Plasma Physics, Popular Physics, Physics and Society, Space Physics, Quantum Physics | |
| Electrical Engineering | 4 | Audio and Speech Processing, Image and Video Processing, Signal Processing, Systems | |
| and | Systems | Science | and Control |
| (EESS) Economics (Econ) | 3 | Econometrics, General Economics, Theoretical Economics | |
| Statistics (Stat) | 6 | Applications, Computation, Methodology, Machine Learning, Other Statistics, Statistics Theory Table 5: arXiv categories. | |
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
✓ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jang-etal-2023-know | {KNOW} How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations | https://aclanthology.org/2023.acl-short.47 | While recent works have been considerably improving the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research in detecting and alleviating inconsistencies among generated NLEs. In this work, we leverage external knowledge bases to significantly improve on an existing adversarial attack for detecting inconsistent NLEs. We apply our attack to high-performing NLE models and show that models with higher NLE quality do not necessarily generate fewer inconsistencies. Moreover, we propose an off-the-shelf mitigation method to alleviate inconsistencies by grounding the model into external background knowledge. Our method decreases the inconsistencies of previous high-performing NLE models as detected by our attack. |
## KNOW **How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations**
Myeongjun Jang1 Bodhisattwa Prasad Majumder3 **Julian McAuley**3 Thomas Lukasiewicz2,1 **Oana-Maria Camburu**4 1University of Oxford, UK 2Vienna University of Technology, Austria 3University of California San Diego 4University College London, UK
[email protected]
## Abstract
While recent works have been considerably improving the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research in detecting and alleviating inconsistencies among generated NLEs. In this work, we leverage external knowledge bases to significantly improve on an existing adversarial attack for detecting inconsistent NLEs. We apply our attack to high-performing NLE models and show that models with higher NLE
quality do not necessarily generate fewer inconsistencies. Moreover, we propose an offthe-shelf mitigation method to alleviate inconsistencies by grounding the model into external background knowledge. Our method decreases the inconsistencies of previous highperforming NLE models as detected by our attack.
## 1 **Introduction**
The accurate yet black-box nature of deep neural networks has accelerated studies on explainable AI. The advent of human-written natural language explanations (NLEs) datasets (Wiegreffe and Marasovic, 2021) has paved the way for the development of models that provide NLEs for their predictions. However, by introducing an adversarial attack, which we hereafter refer to as eIA
(explanation Inconsistency Attack), Camburu et al.
(2020) found that an early NLE model (Camburu et al., 2018) was prone to generate inconsistent NLEs (In-NLEs). More precisely, two *logically* contradictory NLEs generated by a model for two instances that have the same context are considered to form an *inconsistency*. For example, assume a self-driving car stops in a given traffic environment (the context). If the passenger asks the car Q1:"Why did you stop?", and it provides NLE1:
"Because the traffic light is red.", and, for the same context, if the passenger instead asks Q2: "Why did you decide to stop here?" and the car provides 540 NLE2: "Because the traffic light is green", then NLE1 and NLE2 form an inconsistency.
A model that generates In-NLEs is undesirable, as it *either has a faulty decision-making process*
(e.g., the traffic light was green, so the car should not have stopped), or it *generates NLEs that are* not faithfully describing its decision-making process (e.g., the car stopped for a red traffic light, but states that it was green) (Camburu et al., 2020).
While recent high-performing NLE models have largely improved in terms of the quality (plausibility) of the generated NLEs, to our knowledge, these models have not been tested against generating inconsistent NLEs.
In this work, we first propose a fast, efficient, and task-generalizable adversarial attack that utilizes external knowledge bases. Through experiments on two datasets and four models, we verify the increased efficiency of our approach over the eIA
attack, the only inconsistency attack for NLE models, to our knowledge. We also show that the highperforming NLE models are still prone to generating significantly many In-NLEs and, surprisingly, that a higher NLE quality does not necessarily imply fewer inconsistencies. Second, we propose a simple yet efficient off-the-shelf method for alleviating inconsistencies that grounds any NLE model into background knowledge, leading to fewer inconsistencies. The code for this paper is available at https://github.com/MJ-Jang/eKnowIA.
## 2 **Inconsistency Attack**
We propose eKnowIA (explanations **Know**ledgegrounded Inconsistency Attack), which detects more In-NLEs in a faster and more general manner than eIA.
## 2.1 **Original eIA Attack**
Setting. Given an instance x, Camburu et al.
(2020) divide it into: the *context* part xc that remains fixed, and the *variable* part xv that is changed during the attack. For example, xc and xv would be a *premise* and a *hypothesis*, respectively, for natural language inference (NLI - detailed below). Let em(x) denote the NLE generated by a model m for the input x = (xc, xv). The objective is to find ˆxv such that em(x) and em((xc, ˆxv)) are logically contradictory (see examples in Table 10).
Steps. The eIA attack has the following steps (a schematic sketch of the loop is given after the list):
1. Train a neural model to act as a reverse explainer, called REVEXPL, that takes xc and em(x) as input and generates xv, i.e.,
REVEXPL(xc; em(x)) = xv.
2. For each generated NLE em(x):
(a) Automatically create a set of statements Ie that are inconsistent with em(x).
(b) For each eˆ ∈ Ie, generate a variable part ˆxv = REVEXPL(xc; ˆe).
(c) Query m on xˆ = (xc, ˆxv) to get em(ˆx).
(d) Check whether em(ˆx) is indeed inconsistent with em(x) by checking whether em(ˆx) is included in Ie.
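The following is a minimal sketch of steps 2a-2d. The `model`, `rev_expl`, and `make_inconsistent` interfaces are hypothetical stand-ins for the task model, the trained REVEXPL, and the rule-based candidate generator; they are not the authors' actual code.

```python
def inconsistency_attack(model, rev_expl, make_inconsistent, dataset):
    """For every generated NLE, propose contradictory statements, map each back
    to a variable part with REVEXPL, re-query the model, and keep the pairs
    whose new NLE falls inside the proposed inconsistent set."""
    detected = []
    for x_c, x_v in dataset:
        nle = model.explain(x_c, x_v)                 # e_m(x)
        candidates = make_inconsistent(nle)           # step 2a: I_e
        for cand in candidates:
            x_v_hat = rev_expl(x_c, cand)             # step 2b
            nle_hat = model.explain(x_c, x_v_hat)     # step 2c
            if nle_hat in candidates:                 # step 2d: inconsistency found
                detected.append((x_v, x_v_hat, nle, nle_hat))
    return detected
```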
Creating Ie. Camburu et al. (2020) used simple elimination of negation (removing "not" or "n't") and a task-specific template-based approach for this step. For the template-based approach, they manually create a set of label-specific templates for NLEs such that introducing the instance-specific terms of an NLE from one template into any template from another label creates an inconsistency.
They illustrate this process only on the e-SNLI
dataset (Camburu et al., 2018), leaving room to question how easily it generalizes to other datasets.
e-SNLI contains NLEs for the SNLI dataset (Bowman et al., 2015), where the NLI task consists in identifying whether a *premise* and a *hypothesis* are in a relation of *entailment* (if the premise entails the hypothesis), *contradiction* (if the hypothesis contradicts the premise), or *neutral* (if neither entailment nor contradiction hold). Examples of their templates are: "<X> is <Y>" (for *entailment*) and
"<X> cannot be <Y>" (for *contradiction*). Based on the templates, for a em(x) of "A dog is an animal.",
an inconsistent statement of "A dog cannot be an animal" is obtained (<X> = "A dog", <Y> = "an animal"). They manually identified an average of 10 templates per label.
## 2.2 **Our eKnowIA Attack**
The template-based approach in eIA has two major drawbacks: (1) it requires substantial human effort to find an exhaustive set of templates for *each* dataset, and (2) many different ways of obtaining inconsistencies (e.g., using antonyms) are not taken into account. Moreover, even their negation rule can be improved. To alleviate these drawbacks, we adopt three rules.
Negation. We remove and add negation tokens to negated and non-negated sentences, respectively.
To avoid grammatical errors, we add one negation per sentence only if the sentence belongs to one of the following two templates:
- <A> is <B>, <A> are <B> (add "not"),
- <A> has <B>, <A> have <B> (add "does/do not" only if <B> is a noun).
Antonym replacement for adjectives/adverbs.
We replace adjectives/adverbs with their antonyms from ConceptNet (Speer et al., 2017) (using the NLTK POS tagger). Only one adjective or adverb at a time is replaced for each NLE, to avoid deteriorating the contradictory meaning. Employing other abundant thesauruses could improve our approach, which we leave as future work.
Unrelated noun replacement. We replace a noun with an unrelated one, e.g., "human" with "plant".
This is only applied to the noun that is the last word of the sentence, to reduce the possibility of false inconsistencies as the part-of-speech (POS) tagger occasionally made incorrect predictions for words in the middle of a sentence. To get unrelated nouns, we use the *DistinctFrom* and *Antonym* relations in ConceptNet. However, we noticed that ConceptNet contains noisy triplets where the subject and object are not antonyms, such as "man" and "people" for
"person"1. To avoid these, we created a list (see Table 8 in the appendix) of triplets from ConceptNet to be ignored, by manually investigating a random subset of 3000 detected inconsistencies. While this involved human effort, we highlight that this is due to the nature of ConceptNet and other knowledge bases with more accurate instances may be used instead. However, since we only found eight noisy triplets, we decided to keep ConceptNet, which otherwise worked well for our datasets. Finally, we also noticed that our rules may not lead to In-NLEs if both the context and variable part contain negations. Examples are in Table 9 in the appendix. We filter out such pairs.
## 2.3 **Experiments**
Datasets. We consider two tasks: NLI (with the e-SNLI dataset described in Sec. 2.1) and com-
1A pair of words with *opposite* meanings (from Wikipedia).
| Model | Acc. (e-SNLI) | Sr (e-SNLI) | Hr (e-SNLI) | e-ViL (e-SNLI) | Acc. (Cos-E) | Sr (Cos-E) | Hr (Cos-E) | e-ViL (Cos-E) |
|--------------|---------------|-------------|-------------|----------------|--------------|------------|------------|---------------|
| NILE | 90.7 | 3.13 | 2.27 | 0.80 | - | - | - | - |
| KnowNILE | 90.9 | 2.42† | 1.99† | **0.82** | - | - | - | - |
| CAGE | - | - | - | - | 61.4 | 0.42 | 0.06 | 0.43 |
| KnowCAGE | - | - | - | - | 62.6 | 0.11† | 0.01† | **0.44** |
| WT5-base | 90.6 | 12.88 | 1.70 | 0.76 | 65.1 | 0.95 | 0.12 | 0.55 |
| KnowWT5-base | 90.9 | 11.45 | 1.19† | 0.80† | 65.5 | 0.84† | 0.09† | **0.56** |

Table 1: Results of our eKnowIA attack and our method for mitigating In-NLEs. The best results for each pair of (model, Know-model) are in bold; Sr and Hr are given in %; † indicates that Know-models showed a statistically significant difference with p-value < 0.05 using the t-test.
monsense question answering (CQA). The Cos-E 1.0 dataset (Rajani et al., 2019) contains CQA instances formed of a *question*, three *answer candidates*, and an NLE for the correct answer. The objective is to select an *answer* among the three candidates given a *question* and to generate an NLE to support the answer. Following Camburu et al. (2020), we set the premise as context and the hypothesis as the variable part for e-SNLI. For Cos-E, to avoid omitting the correct answer, we set the question and the correct answer as the context, and the remaining two answer candidates as the variable part. Just like eIA, our attack is solely intended for detecting In-NLEs and not as a label attack (which may or may not happen).

Evaluation metrics. Let Ie be generated at Step 2a for each instance in a test set D*test*, and let Is ⊆ Ie be the set of detected In-NLEs (after Step 2d). For each instance, our attack can identify multiple inconsistencies (via multiple variable parts). We, therefore, use two evaluation metrics: hit-rate (Hr) and success-rate (Sr):

$$S_r = N_c/|D_{test}| \quad \text{and} \quad H_r = |I_s|/|I_e|,$$

where Nc is the number of unique instances for which the attack identified at least one inconsistency. Intuitively, Sr denotes the ratio of the test instances where the attack is successful, while Hr denotes the ratio of detected In-NLEs to that of the proposed In-NLEs.

Models. We consider the following high-performing NLE models, with their implementation detailed in Appendix A.1: NILE (Kumar and Talukdar, 2020) for NLI, CAGE (Rajani et al., 2019) for CQA, and WT5-base (220M parameters) (Narang et al., 2020) for both tasks. WT5 models with more parameters (e.g., WT5-11B) would require considerably more computing while providing relatively small gains in NLE quality (32.4 for WT5-base vs. 33.7 for WT5-11B (Narang et al., 2020)). Therefore, they are not considered here due to limited computing resources.

| Dataset | Method | Time | Sr | Hr |
|---------|---------|----------|-------|-----------|
| e-SNLI | eIA | 10 days | 2.19 | 384/24M |
| e-SNLI | eKnowIA | 40 min | 12.88 | 1,494/88K |
| Cos-E | eIA | 2.5 days | 0.32 | 5/5M |
| Cos-E | eKnowIA | 5 min | 0.95 | 13/11K |

Table 2: Comparison between eIA and eKnowIA on WT5-base. The best results are in bold; Sr is given in %; Hr values are in fractions to emphasise the high denominators of the eIA.
## 2.4 **Results**
eKnowIA vs. eIA. We compare eKnowIA with eIA only on the WT5-base model, since eIA requires a prohibiting amount of time. As in Camburu et al. (2020), we manually verified the naturalness of adversarial hypotheses on 50 random samples for each model. Sentences that go against common sense are considered unnatural. Minor grammatical errors and typos are ignored. We observe that 81.5% of the adversarial hypotheses were natural, on average, for each model. Details are in Appendix A.4. The results are summarized in Table 2. The e-SNLI results are adjusted to reflect the proportion of natural adversarial hypotheses by multiplying the number of detected pairs of In-NLEs for each model with the estimated naturalness ratio. For Cos-E, an unnatural variable part would consist of stop words or a repetition of another answer candidate. We automatically found 2 out of 22 examples to be unnatural, which were removed. We observe that eIA generates a tremendous amount of inconsistent candidates (Ie),
e.g., 24M for e-SNLI, thus being extremely slow
(e.g., 10 days vs. 40 min for eKnowIA), while also obtaining lower Sr and Hr than eKnowIA (e.g.,
2.19% vs. 12.88% Sr).
eKnowIA on NLE models. The results of eKnowIA applied to NILE, CAGE, and WT5 are in
| PREMISE: A man is riding his dirt bike through the air in the desert. | |
|------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| HYPOTHESIS: A man is on a motorbike | HYPOTHESIS: The man is riding a motorbike. |
| PREDICTED LABEL: entailment | PREDICTED LABEL: contradiction |
| EXPLANATION: A dirt bike is a motorbike. | EXPLANATION: A dirt bike is not a motorbike. |
| QUESTION: John knew that the sun produced a massive amount of energy in two forms. If you were on the surface of the sun, what would kill you first? | |
| CHOICES: heat, light, life on earth | CHOICES: heat, light, darkness |
| PREDICTED LABEL: heat | PREDICTED LABEL: heat |
| EXPLANATION: the sun produces heat and light. | EXPLANATION: the sun produces heat and darkness. |
Table 3: Examples of inconsistent NLEs detected by eKnowIA for WT5 on e-SNLI and CAGE on Cos-E. The first column shows the original variable part, and the second column shows the adversarial one.
| PREMISE: A man is riding his dirt bike through the air in the desert. | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|-------------------------|
| HYPOTHESIS: A man is on a motorbike | HYPOTHESIS: The man is riding a motorbike. | |
| PREDICTED LABEL: entailment | PREDICTED LABEL: entailment | |
| EXTRACTED KNOWLEDGE: {dirt bike, IsA, motorcycle}, | EXTRACTED KNOWLEDGE: {dirt bike, IsA, motorcycle}, | |
| {desert, MannerOf, leave}, {air, HasA, oxygen} | {desert, MannerOf, leave}, {air, HasA, oxygen} | |
| EXPLANATION: A dirt bike is a motorbike. | EXPLANATION: A dirt bike is a motorbike. | |
| QUESTION: John knew that the sun produced a massive amount of energy in two forms. If you were on the surface of the sun, what would kill you first? | | |
| CHOICES: heat, light, life on earth | CHOICES: heat, light, darkness | |
| PREDICTED LABEL: heat | PREDICTED LABEL: heat EXTRACTED KNOWLEDGE: | {light, Antonym, dark}, |
| EXTRACTED KNOWLEDGE: {light, IsA, energy} ,{heat, IsA, energy} | {heat, IsA, energy} | |
| EXPLANATION: light and heat are two forms of energy. | EXPLANATION: the sun produces heat and light. | |
Table 4: Examples of successfully defended instances by KnowWT5 on e-SNLI and KnowCAGE on Cos-E. This table should be read together with Table 3 to appreciate the defence.
the upper lines of each block in Table 1. All models are vulnerable to the inconsistency attack. Also, a better NLE quality may not necessarily guarantee fewer inconsistencies. For example, WT5-base has a better NLE quality than CAGE on Cos-E (0.55 vs. 0.43 e-ViL score; see below), but eKnowIA detected more inconsistencies for WT5-base than for CAGE (0.95 vs. 0.42 success rate). Examples of generated In-NLEs are in Table 3. More examples are in Tables 10–12 in Appendix A.7. We observe that the In-NLEs usually contradict common sense, which is aligned with previous studies showing that language models, used as pre-trained components in the NLE models, often suffer from factual incorrectness (Mielke et al., 2020; Zhang et al., 2021).
## 3 **Our KNOW Method for Alleviating Inconsistencies**
Our approach for alleviating inconsistencies in NLE models consists of two steps: (1) extraction of knowledge related to the input and (2) knowledge injection.
Extracting related knowledge. We leverage a knowledge extraction heuristic proposed by Xu et al. (2021) as follows (a sketch of the scoring and grounding steps is given below):
1. Extract entities from an input's context part.
2. Find all knowledge triplets that contain the
entities.
3. For each entity, calculate a weight sj for each extracted triplet as:
$$s_{j}=w_{j}\times N/N_{r_{j}}{\mathrm{~and~}}N=\sum_{j=1}^{K}N_{r_{j}},$$
where wj is the weight of the j-th triplet predefined by the knowledge base (e.g., ConceptNet), Nrj is the number of extracted triplets of the relation rj for the given instance, and K is the total number of triplets containing the entity for the given instance.
4. For each entity, extract the triplet with the highest score.
Grounding with the extracted knowledge. After extracting the triplet with the highest weight per entity in an instance, we transform each of them into natural language and concatenate them to the instance. We use "Context:" as a separator between the input and the triplets. We leverage the templates that transform a relation into free-text (e.g., IsA to
"is a") from Petroni et al. (2019).
## 3.1 **Experiments**
We apply our KNOW approach to NILE, CAGE,
and WT5-base, and name them KnowNILE,
KnowCAGE, and KnowWT5-base, respectively.
Inconsistencies. The results in Table 1 show that grounding in commonsense knowledge diminishes the number of In-NLEs for all models and tasks.
The KNOW models defended against 58% of the examples attacked by eKnowIA. Also, we observed that, among the inconsistent examples of KNOW
models, 20% of them on average were newly introduced instances. Examples that failed to be defended, as well as newly introduced In-NLEs, are provided in Tables 15-16 in Appendix A.7. Successfully defended examples are provided in Table 4. More successfully defended examples, non-defended examples, and newly attacked examples can be found in Tables 13-14 in Appendix A.7.

First, we highlight that a successfully defended example means that our eKnowIA attack did not find an adversarial instance together with which the KNOW model would form a pair of In-NLEs, while our attack did find at least one such adversarial instance for the original model. Second, we notice that even when the selected knowledge might not be the exact knowledge needed to label an instance correctly, the model can still benefit from this additional knowledge. For example, in the first sample in Table 14 in Appendix A.7, the most appropriate knowledge triplet would be {dog, DistinctFrom, bird}. However, despite the indirect knowledge given,2 i.e., {dog, DistinctFrom, cat}, the model is able to defend the In-NLE by inferring that dogs are different from other animals. To examine whether the improved consistency of the KNOW models stems from *knowledge leakage* (using the same knowledge triplets in the mitigation method as in the attack), we calculate the overlap of triplets. On the e-SNLI dataset, we find that only 0.3% of knowledge triplets are reused for the attack on the KNOW models, and no overlap was found for the Cos-E dataset. This indicates that the leakage is not significant.
NLE quality. To evaluate the quality of generated NLEs, we conducted a human evaluation using Amazon MTurk, as automatic evaluation metrics only weakly reflect human judgements (Kayser et al., 2021). We follow the setup from Kayser et al.
(2021): we asked annotators (three per instance)
to judge whether the generated NLEs justify the answer with four options: {no, weak no, *weak yes*,
yes} and calculated the e-ViL score by mapping them to {0, 1/3, 2/3, 1}, respectively. Details of the human evaluation are in Appendix A.5. In Table 1, the KNOW models show similar NLE quality to their original counterparts, suggesting that our KNOW method preserves NLE quality while decreasing inconsistencies. Similar results are observed on the automatic evaluation of NLEs (see Appendix A.6).
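As a concrete illustration of the scoring, the e-ViL score can be computed as follows; this is a minimal sketch, and the aggregation over annotators and instances by simple averaging is an assumption.

```python
EVIL_MAP = {"no": 0.0, "weak no": 1/3, "weak yes": 2/3, "yes": 1.0}

def evil_score(per_instance_answers):
    """Average e-ViL score: `per_instance_answers` is a list of annotator-answer
    lists, one per generated NLE; answers are mapped to {0, 1/3, 2/3, 1}."""
    scores = [sum(EVIL_MAP[a] for a in answers) / len(answers)
              for answers in per_instance_answers]
    return sum(scores) / len(scores)
```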
## 4 **Related Work**
A growing number of works focus on building NLE
models in different areas such as natural language inference (Camburu et al., 2018), question answering (Narang et al., 2020), visual-textual reasoning
(Hendricks et al., 2018; Kayser et al., 2021; Majumder et al., 2022), medical imaging (Kayser et al.,
2022), self-driving cars (Kim et al., 2018), and offensiveness classification (Sap et al., 2019). Most commonly, the performance of these models is assessed only in terms of how plausible the reasons provided by their NLEs are. To our knowledge, Camburu et al. (2020) is the only work to investigate inconsistencies in NLEs. We improve their adversarial attack as well as bring an approach to alleviate inconsistencies. Works have also been conducted to analyse and make dialogue models generate responses consistent with the dialogue history (Zhang et al., 2018; Welleck et al., 2019; Li et al., 2020). However, these works are difficult to be applied to NLE models, in part because they require specific auxiliary datasets, such as pairs of inconsistent sentences. Other works investigated the logical consistency of a model's predictions (Elazar et al., 2021; Mitchell et al., 2022; Kumar and Joshi, 2022; Lin and Ng, 2022), but would not have straightforward extensions for investigating NLEs inconsistencies. Besides consistency, NLEs can also be assessed for their faithfulness w.r.t. the decision-making process of the model that they aim to explain (Wiegreffe et al.,
2021; Atanasova et al., 2023).
## 5 **Summary And Outlook**
We proposed the eKnowIA attack, which is more generalizable, successful, and faster than the previous eIA attack in detecting In-NLEs. Our experiments show that current NLE models generate a significant number of In-NLEs, and that higher NLE
quality does not necessarily imply fewer inconsistencies. We also introduced a simple but efficient method that grounds a model into relevant knowledge, decreasing the number of In-NLEs. Our work paves the way for further work on detecting and alleviating inconsistencies in NLE models.
## Limitations
Our eKnowIA attack contains logical rules designed specifically for the English language. While these rules may apply or be adapted to other languages with simple morphology, there could be languages in which completely new rules may be needed. Both our attack and the KNOW method rely on knowledge bases, which may sometimes be noisy. We employed manual efforts to eliminate (a small number of) noisy triples from ConceptNet.
Our attack also relies on a manual annotation to ensure that the adversarial inputs are natural (estimated to be the case 81.5% of the time). Finally, we were not able to test our methods on instances with long text, as we are not aware of datasets with NLEs for long text inputs or long NLEs.
## Acknowledgements
This work was partially supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1, by the AXA Research Fund, and by the EU TAILOR grant 952215. Oana-Maria Camburu was supported by a Leverhulme Early Career Fellowship. We also acknowledge the use of Oxford's ARC facility, of the EPSRC-funded Tier 2 facility JADE II (EP/ T022205/1), and of GPU computing support by Scan Computers International Ltd.
## References
Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, and Isabelle Augenstein. 2023. Faithfulness Tests for Natural Language Explanations. In ACL.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In ACL
Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *EMNLP*.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. In *NeurIPS*, volume 31.
Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, and Phil Blunsom. 2020. Make up your mind! Adversarial generation of inconsistent natural language explanations. In ACL.
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models.
arXiv preprint arXiv:2102.01017.
Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. 2018. Grounding visual explanations. In *ECCV*.
Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, and Thomas Lukasiewicz. 2021. e-ViL: A
dataset and benchmark for natural language explanations in vision-language tasks. In *ICCV*.
Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, and Thomas Lukasiewicz. 2022. Explaining chest x-ray pathologies in natural language. In *MICCAI*, pages 701–
713.
Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. 2018. Textual explanations for self-driving vehicles. In *ECCV*.
Ashutosh Kumar and Aditya Joshi. 2022. Striking a balance: Alleviating inconsistency in pre-trained models for symmetric classification tasks. In *Findings of ACL*, pages 1887–1895.
Sawan Kumar and Partha Talukdar. 2020. NILE: Natural language inference with faithful natural language explanations. In ACL.
Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Jason Weston. 2020. Don't say that! Making inconsistent dialogue unlikely with unlikelihood training. In ACL.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*.
Ruixi Lin and Hwee Tou Ng. 2022. Does BERT know that the IS-a relation is transitive? In ACL, pages 94–99.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *ICLR*.
Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, and Julian McAuley. 2022.
Rationale-inspired natural language explanations with commonsense. In *ICML*.
Ana Marasović, Chandra Bhagavatula, Jae Sung Park,
Ronan Le Bras, Noah A. Smith, and Yejin Choi.
2020. Natural language rationales with full-stack visual reasoning: From pixels to semantic frames to commonsense graphs. In *Findings of EMNLP*.
Ana Marasović, Iz Beltagy, Doug Downey, and
Matthew E. Peters. 2022. Few-shot self-rationalization with natural language prompts.
In *Findings of NAACL*.
Sabrina J. Mielke, Arthur Szlam, Y-Lan Boureau, and Emily Dinan. 2020. Linguistic calibration through metacognition: Aligning dialogue agent responses with expected correctness. *arXiv preprint* arXiv:2012.14983.
Eric Mitchell, Joseph J. Noh, Siyan Li, William S.
Armstrong, Ananth Agarwal, Patrick Liu, Chelsea Finn, and Christopher D. Manning. 2022. Enhancing self-consistency and performance of pre-trained language models through natural language inference.
arXiv preprint arXiv:2211.11875.
Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020.
WT5?! Training text-to-text models to explain their predictions. *arXiv preprint arXiv:2004.14546*.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *EMNLP-IJCNLP*.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *JMLR*, 21.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself!
Leveraging language models for commonsense reasoning. In ACL.
Steven V. Rouse. 2019. Reliability of MTurk data from masters and workers. *Journal of Individual Differences*.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2019.
Social bias frames: Reasoning about social and power implications of language. *arXiv preprint* arXiv:1911.03891.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
ConceptNet 5.5: An open multilingual graph of general knowledge. In *AAAI*.
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In ACL.
Sarah Wiegreffe and Ana Marasovic. 2021. Teach me to explain: A review of datasets for explainable NLP. In *NeurIPS*, volume 35.
Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith.
2021. Measuring association between labels and free-text rationales. In *EMNLP*.
Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang. 2021. Fusing context into knowledge graph for commonsense question answering. In *Findings of ACL*.
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, and Oana-Maria Camburu. 2022. Few-Shot Out-ofDomain Transfer of Natural Language Explanations.
In *Findings of EMNLP*.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. BERTScore:
Evaluating text generation with BERT. In *ICLR*.
Weifeng Zhang, Jing Yu, Wenhong Zhao, and Chuan Ran. 2021. DMRFNet: Deep multimodal reasoning and fusion for visual question answering and explanation generation. *Information Fusion*, 72:70–79.
## A **Appendix**

## A.1 **Implementation Details**
We implemented the WT5-base model based on the HuggingFace transformers package3 and replicated performance close to the reported results (see Section A.3). For the other models, we used the implementations provided by the respective authors.
A single Titan X GPU was used.
## A.2 Training REVEXPL
We adopted T5-base (Raffel et al., 2020) for training the reverse explainer (REVEXPL). We trained the model for 30 epochs with a batch size of 8.
For efficient training, early stopping was applied if the validation loss increases for 10 consecutive logging steps, which were set to 30,000 iterations.
The dropout ratio was set to 0.1. We used the AdamW optimiser (Loshchilov and Hutter, 2018)
with learning rate 1e−4 and epsilon 1e−8. We also used gradient clipping to a maximum norm of 1.0 and a linear learning rate schedule decaying from 5e−5.
For Cos-E, we used 10% of the training data as the validation set, and the original validation set as the test set.
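With the HuggingFace transformers library, the above setup corresponds roughly to the following sketch. The dataset objects and output directory are placeholders (pre-tokenized (context; NLE) -> variable-part pairs), and the argument values mirror the description above rather than a released training script; dropout 0.1 is the T5 default.

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, EarlyStoppingCallback)

def train_revexpl(train_ds, val_ds):
    """Fine-tune T5-base as REVEXPL: (context; NLE) -> variable part."""
    tokenizer = AutoTokenizer.from_pretrained("t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

    args = Seq2SeqTrainingArguments(
        output_dir="revexpl",            # placeholder
        num_train_epochs=30,
        per_device_train_batch_size=8,
        learning_rate=1e-4,              # AdamW, with epsilon below
        adam_epsilon=1e-8,
        max_grad_norm=1.0,               # gradient clipping
        lr_scheduler_type="linear",
        evaluation_strategy="steps",
        eval_steps=30_000,               # logging/early-stopping granularity
        save_steps=30_000,
        load_best_model_at_end=True,
        metric_for_best_model="eval_loss",
        greater_is_better=False,
    )
    trainer = Seq2SeqTrainer(
        model=model, args=args,
        train_dataset=train_ds, eval_dataset=val_ds,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
    )
    trainer.train()
    return model, tokenizer
```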
## A.3 **WT5-base Performance Replication**
This section describes the performance of our trained WT5-base model. We report the accuracy for measuring the performance on the natural language inference (NLI) and CQA tasks. To automatically evaluate the quality of generated NLEs, we use the BLEU score (Papineni et al., 2002),
ROUGE (Lin, 2004), Meteor (Banerjee and Lavie, 2005), and the BERT score (Zhang et al., 2020),
which are widely used automatic evaluation metrics. The results are summarised in Table 5. In terms of accuracy and BLEU score, our replication performs better than originally reported for Cos-E,
but produced slightly lower results for e-SNLI.
| Dataset | | Acc. | BLEU | R-1 | R-2 | R-L | Meteor | BERT-S |
|---------|----------|------|------|------|------|------|--------|--------|
| e-SNLI | ours | 90.6 | 28.4 | 45.8 | 22.5 | 40.6 | 33.7 | 89.8 |
| e-SNLI | reported | 90.9 | 32.4 | - | - | - | - | - |
| Cos-E | ours | 65.3 | 7.3 | 25.0 | 8.3 | 21.6 | 20.2 | 86.3 |
| Cos-E | reported | 59.4 | 4.6 | - | - | - | - | - |

Table 5: Performance of our implementation of WT5-base on e-SNLI and Cos-E. The notations R-1, R-2, R-L, and BERT-S denote ROUGE-1, ROUGE-2, ROUGE-L score, and BERT-Score, respectively.
## A.4 **Naturalness Evaluation of the Generated Variable Parts**
It could be unfair to consider that a model generates inconsistent NLEs if the adversarial variable parts are unnatural. Hence, we manually evaluated 50 random samples of generated adversarial variable parts for each model (or all samples when there were less than 50 pairs of inconsistencies found).
On e-SNLI, we observe that, on average, 81.5%
(± 1.91) of the reverse variable parts were natural instances, i.e., semantically valid and not contradicting commonsense. The specific figures for each e-SNLI model were 80%, 80%, 84%, and 82% for KnowNILE, NILE, WT5, and KnowWT5, respectively. We adapted the results in Table 1 to reflect the number of inconsistencies caused only by natural variable parts.
For the Cos-E dataset, we considered that the variable parts (the two incorrect answer choices)
are unnatural if (1) the answer choices are stopwords of the NLTK package or (2) the correct answer is repeated. We observed only one unnatural case for KnowWT5 and WT5, respectively, and none for the other two models. We eliminated the two cases from the counts.
## A.5 **Design of Human Evaluation Process for Assessing NLE Quality**
For the human evaluation, we sampled 200 generated NLEs for each model. Three Anglophone annotators are employed per instance. We selected annotators with a Lifetime HITs acceptance rate of at least 98% and an accepted number of HITs greater than 1,000. However, it is widely known that the quality of MTurk annotation is not guaranteed even for Master workers (Rouse, 2019). When we used the e-ViL evaluation framework off-the-shelf (Kayser et al., 2021), we found that many workers do annotations without due consideration by simply checking "yes" in most cases. We also initially obtained an inter-annotator agreement captured by Fleiss's Kappa (K) of only 0.06 on average for Cos-E, which cast doubt on the quality of the evaluation. This prompted us to add a quality control measure to the evaluation framework. We carefully collected *trusted examples* where the quality of the NLEs is objectively "yes" or "no". For each HIT consisting of 10 examples, we incorporated in random locations two trusted examples with the correct answers being "yes" and "no", respectively.
After annotation, we discarded the HITs where
| Model | e-ViL (e-SNLI) | W/Yes (e-SNLI) | W/No (e-SNLI) | e-ViL (Cos-E) | W/Yes (Cos-E) | W/No (Cos-E) |
|----------|----------------|----------------|---------------|---------------|---------------|--------------|
| CAGE | - | - | - | 0.43 | 46 | 54 |
| KnowCAGE | - | - | - | 0.44 | 47 | 53 |
| NILE | 0.80 | 83 | 17 | - | - | - |
| KnowNILE | 0.82 | 86 | 14 | - | - | - |
| WT5 | 0.76 | 80 | 20 | 0.55 | 55 | 45 |
| KnowWT5 | 0.80 | 84 | 16 | 0.56 | 57 | 43 |
| Dataset | Model | BLEU | R-1 | R-2 | R-L | Meteor | BERT-S |
|---------|--------------|------|------|------|------|--------|--------|
| e-SNLI | WT5-base | 28.4 | 45.8 | 22.5 | 40.6 | 33.7 | 89.8 |
| e-SNLI | KnowWT5-base | 30.6 | 48.2 | 24.6 | 43.0 | 38.0 | 90.5 |
| e-SNLI | NILE | 22.3 | 41.7 | 18.7 | 36.3 | 30.2 | 90.0 |
| e-SNLI | KnowNILE | 22.4 | 42.0 | 18.9 | 36.5 | 30.5 | 90.1 |
| Cos-E | WT5-base | 7.3 | 25.0 | 8.3 | 21.6 | 20.2 | 86.3 |
| Cos-E | KnowWT5-base | 7.9 | 26.7 | 9.6 | 22.9 | 21.8 | 86.7 |
| Cos-E | CAGE | 3.0 | 9.7 | 1.1 | 9.0 | 6.3 | 85.1 |
| Cos-E | KnowCAGE | 3.0 | 9.8 | 1.0 | 9.0 | 6.4 | 85.1 |
the annotators gave a wrong answer for any of the trusted examples (we consider correct a "weak yes" answer for a "yes" trusted example and a "weak no" for a "no" trusted example). We repeated this process until the number of rejected HITs was fewer than 15% of the total HITs. We achieved an increased K value of 0.46 and 0.34 for e-SNLI and Cos-E, respectively, from 0.35 and 0.06 (without trusted examples). Similar levels of K as ours were obtained in other studies, such as (Marasović et al., 2022; Yordanov et al., 2022).
## A.6 **Quality Evaluation on the Generated NLEs**
Table 6 shows the detailed results of human evaluation on the quality of generated NLEs. In addition to the e-ViL score, we followed the evaluation method of Marasović et al. (2020) by merging weak no and *weak yes* to no and yes, respectively, and reporting the ratios of *w/yes* and *w/no*. Also, the results of the automatic evaluation metrics are provided in Table 7. The results show that all the Know-models show similar or better results than their original counterparts.
| Subject | Relation | Object |
|------------|--------------|----------|
| men | Antonym | humans |
| man | Antonym | person |
| woman | Antonym | person |
| people | Antonym | person |
| flower | DistinctFrom | plant |
| politician | Antonym | man |
| children | Antonym | people |
## A.7 **Examples**
| ORIGINAL EXPLANATION | REVERSE EXPLANATION |
|----------------------------------------------------|-------------------------------------------------------|
| Not all men are teaching science. | Not all men are teaching biology. |
| A dog is not a car. | A dog is not a bike. |
| The boy is not necessarily looking at another boy. | The boy is not necessarily looking at another female. |
| A child is not a man. | A child is not a wife. |
| A bird is not a squirrel. | A bird is not a moose. |
| A group of dogs is not a woman. | A group of dogs is not a person. |
Table 9: Examples where both the original and reverse NLEs contain negation expressions. These NLEs are not contradictory with each other.
| PREMISE: Two hussars sit perched on horses, dressed in extravagant ceremonial wear, each holding a sabre in their right hand, reigns to the horse in their left. | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------|
| HYPOTHESIS: There are professional riders at a ceremony. | HYPOTHESIS: Two amateur riders are riding horses. |
| PREDICTED LABEL: Entailment | PREDICTED LABEL: Entailment |
| EXPLANATION: Hussars are professional riders. | EXPLANATION: Hussars are amateur riders. |
| PREMISE: A cheerleader in a tight red and white uniform is passing out white t-shirts at a sporting event. | |
| HYPOTHESIS: A player passes out hotdogs. | HYPOTHESIS: A player is passing out shirts. |
| PREDICTED LABEL: Contradiction | PREDICTED LABEL: Entailment |
| EXPLANATION: A cheerleader is not a player. | EXPLANATION: A cheerleader is a player. |
| PREMISE: Two people using a water buffalo to cultivate a watery field. | |
| HYPOTHESIS: Two people are outside with animals. | HYPOTHESIS: Two people are using a plant. |
| PREDICTED LABEL: Entailment | PREDICTED LABEL: Entailment |
| EXPLANATION: A water buffalo is an animal. | EXPLANATION: A water buffalo is a plant. |
| QUESTION: Crabs live in what sort of environment? | |
| CHOICES: bodies of water, saltwater, galapagos | CHOICES: bodies of earth, saltwater, atlantic ocean |
| PREDICTED ANSWER: bodies of water | PREDICTED ANSWER: bodies of earth |
| EXPLANATION: Crabs live in bodies of water. | EXPLANATION: Crabs live in bodies of earth. |
| QUESTION: The piece of paper was worth a lot of money, it was an old Apple Inc what? | |
| CHOICES: stock certificate, copy machine, ream | CHOICES: stock certificate, piece of stone, book |
| PREDICTED ANSWER: stock certificate | PREDICTED ANSWER: stock certificate |
| EXPLANATION: A stock certificate is the only thing | EXPLANATION: A stock certificate is the only thing |
| that is not a piece of paper. | that is a piece of paper. |
| QUESTION: When a person admits his mistakes, what are they doing? | |
| CHOICES: act responsibly, learn to swim, feel relieved | CHOICES: act responsibly, think critically, act irresponsibly |
| PREDICTED ANSWER: act responsibly | PREDICTED ANSWER: act irresponsibly |
| EXPLANATION: when a person admits his mistakes, | EXPLANATION: when a person admits his mistakes, |
| they act responsibly. | they act irresponsibly. |
Table 10: Examples of inconsistent NLEs detected by eKnowIA for the WT5-base model on e-SNLI and Cos-E.
The first column shows the original variable part and the second column shows the adversarial one.
| Original | Adversarial |
|----------|-------------|
| PREMISE: These are young adults who seem to be working together to protect the plants surrounding the white pole. | |
| HYPOTHESIS: The adults are young. | HYPOTHESIS: The young adults are underdeveloped. |
| PREDICTED LABEL: Entailment | PREDICTED LABEL: Entailment |
| EXPLANATION: Young adults are young. | EXPLANATION: Young adults are seniors. |
| PREMISE: A young boy with close-cropped hair, wearing a red robe, is holding a black kettle as someone is about to pour something in it. | |
| HYPOTHESIS: a small boy is holding a kettle that will soon be filled | HYPOTHESIS: The small boy is not underdeveloped. |
| PREDICTED LABEL: Entailment | PREDICTED LABEL: Entailment |
| EXPLANATION: A young boy is a small boy. | EXPLANATION: A young boy is a small female. |
| PREMISE: A group of people in a room, most with computers, some raising their hands. | |
| HYPOTHESIS: Some people have questions. | HYPOTHESIS: A group of people are asking questions. |
| PREDICTED LABEL: Neutral | PREDICTED LABEL: Contradiction |
| EXPLANATION: People raising their hands are not asking questions. | EXPLANATION: People raising their hands are asking questions. |
Table 11: Examples of inconsistent NLEs detected by our eKnowIA attack for the NILE model on e-SNLI. The first column shows the original hypothesis, and the second one shows the adversarial hypothesis from our attack.
| QUESTION: A good interview after applying for a job may cause you to feel what? | |
|-----------------------------------------------------------------------------------|----------------------------------------------------------|
| CHOICES: hope, income, offer | CHOICES: hope, resentment, fear |
| PREDICTED ANSWER: hope | REVERSE ANSWER: hope |
| EXPLANATION: hope is the only thing that would cause you | EXPLANATION: hope is the only thing that would cause you |
| to feel hope. | to feel fear. |
| QUESTION: What does a stove do to the place that it's in? | |
| CHOICES: warm room, brown meat, gas or electric | CHOICES: warm room, cook food, heat the outside |
| PREDICTED ANSWER: warm room | REVERSE ANSWER: heat the outside |
| EXPLANATION: a stove heats the room. | EXPLANATION: a stove heats the outside. |
Table 12: Examples of inconsistent NLEs detected by our eKnowIA attack for the CAGE model on Cos-E. The first column shows the original hypothesis, and the second column shows the adversarial hypothesis from our attack.
| PREMISE: A dog standing near snow looking at water. | |
|--------------------------------------------------------------|----------------------------------------|
| HYPOTHESIS: A bird is standing near snow. | HYPOTHESIS: A bird is near water. |
| PREDICTED LABEL: contradiction | PREDICTED LABEL: entailment |
| EXPLANATION: A dog is not a bird. | EXPLANATION: A dog is a bird. |
| QUESTION: What is a person who is good at sports considered? | |
| CHOICES: talented, affluent, reproduce | CHOICES: talented, untalented, good at |
| PREDICTED LABEL: talented | PREDICTED LABEL: untalented |
| EXPLANATION: a person who is good at sports is considered talented. | EXPLANATION: a person who is good at sports is considered untalented |
Table 13: Examples of inconsistent NLEs detected by eKnowIA for NILE on e-SNLI and WT5 on Cos-E. The first column shows the original variable part, and the second column shows the adversarial one.
| PREMISE: A dog standing near snow looking at water. HYPOTHESIS: A bird is near water. | |
|-----------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| HYPOTHESIS: A bird is standing near snow. | HYPOTHESIS: A bird is near water. |
| PREDICTED LABEL: contradiction | PREDICTED LABEL: neutral |
| EXPLANATION: A dog is not a bird. | EXPLANATION: A dog looking at water does not imply a bird is near water. |
| EXTRACTED KNOWLEDGE: {snow, RelatedTo, water}, | EXTRACTED KNOWLEDGE: {snow, RelatedTo, water}, |
| {dog, DistinctFrom, cat} | {dog, DistinctFrom, cat} |
| QUESTION: What is a person who is good at sports considered? CHOICES: talented, untalented, good at | |
| CHOICES: talented, affluent, reproduce | CHOICES: talented, untalented, good at |
| PREDICTED LABEL: talented | PREDICTED LABEL: talented |
| EXPLANATION: a person who is good at sports is considered talented. | EXPLANATION: a person who is good at sports is considered talented. |
| EXTRACTED KNOWLEDGE: {talent, RelatedTo, sports} | EXTRACTED KNOWLEDGE: {talent, RelatedTo, sports} |
Table 14: Examples of successfully defended instances by the KnowNILE model on e-SNLI and by the KnowWT5 model on Cos-E. This table should be read together with Table 13 to appreciate the defence.
| MODEL: KnowWT5 | |
|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
| PREMISE: A young family enjoys feeling ocean waves lap at their feet. ORIGINAL ADVERSARIAL | |
| HYPOTHESIS: A family is at the beach. | HYPOTHESIS: The family is not at the beach. |
| PREDICTED LABEL: entailment | PREDICTED LABEL: entailment |
| EXPLANATION: Ocean waves lap at their feet implies that | EXPLANATION: Ocean waves lap at their feet implies that |
| they are at the beach. | they are not at the beach. EXTRACTED KNOWLEDGE: {feet, HasA, fingerprints} |
| MODEL: KnowNILE | |
| PREMISE: Some dogs are running on a deserted beach. | |
| ORIGINAL | ADVERSARIAL |
| HYPOTHESIS: There are multiple dogs present. | HYPOTHESIS: There are not many dogs on the beach. |
| PREDICTED LABEL: entailment | PREDICTED LABEL: entailment |
| EXPLANATION: Some dogs implies multiple dogs. | EXPLANATION: Some dogs implies not many dogs. EXTRACTED KNOWLEDGE: {dog, DistinctFrom, cat} |
| MODEL: KnowCAGE | |
| QUESTION: What does the sky do before a rain? | |
| CHOICES: cloud over, rain water, appear blue | CHOICES: cloud over, appear blue, appear green |
| PREDICTED LABEL: appear blue | PREDICTED LABEL: appear green |
| EXPLANATION: the sky appears blue before a rain | EXPLANATION: the sky appears green before a rain EXTRACTED KNOWLEDGE: {sky, UsedFor, rain} |
Table 15: Examples of inconsistent NLEs detected by eKnowIA but not defended by Know-models. The extracted knowledge triplets are not highly related to generating correct explanations.
| PREMISE: The collie is standing outdoors on a sandy area. | |
|---|---|
| ORIGINAL | ADVERSARIAL |
| HYPOTHESIS: The collie is standing in the sand. | HYPOTHESIS: The collie is standing on stone. |
| PREDICTED LABEL: entailment | PREDICTED LABEL: entailment |
| EXPLANATION: A sandy area is made of sand. | EXPLANATION: A sandy area is made of stone. |
| | EXTRACTED KNOWLEDGE: {sand, RelatedTo, rock} |
| MODEL: KnowNILE | |
| PREMISE: Coach talks with football player, other players and crowd in background. | |
| ORIGINAL | ADVERSARIAL |
| HYPOTHESIS: A football player is climbing into the stands at a game. | HYPOTHESIS: A football player talks to a crowd. |
| PREDICTED LABEL: contradiction | PREDICTED LABEL: entailment |
| EXPLANATION: A coach is not a football player. | EXPLANATION: A coach is a football player. |
| | EXTRACTED KNOWLEDGE: {crowd, IsA, gathering}, {player, PartOf, team}, {football player, DerivedFrom, football} |
Table 16: Examples of newly detected instances with inconsistent NLEs by eKnowIA for the KNOW models. The extracted knowledge triplets exhibit low relevance and confuse the model to generate incorrect explanations.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We discuss our limitations in the "Limitations" section.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1. Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Sections 3 And 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.1.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 2 and 4.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Table 2.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4, Appendix A.1 A. 4.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Appendix A.5.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix A.5.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix A.5.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 4.1. and Appendix A.5.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Appendix A.5. |
sun-etal-2023-measuring | Measuring the Effect of Influential Messages on Varying Personas | https://aclanthology.org/2023.acl-short.48 | Predicting how a user responds to news events enables important applications such as allowing intelligent agents or content producers to estimate the effect on different communities and revise unreleased messages to prevent unexpected bad outcomes such as social conflict and moral injury. We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona (characterizing an individual or a group) might have upon seeing a news message. Compared to the previous efforts which only predict generic comments to news, the proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response. This enables more accurate and comprehensive inference on the mental state of the persona. Meanwhile, the generated sentiment dimensions make the evaluation and application more reliable. We create the first benchmark dataset, which consists of 13,357 responses to 3,847 news headlines from Twitter. We further evaluate the SOTA neural language models with our dataset. The empirical results suggest that the included persona attributes are helpful for the performance of all response dimensions. Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups. | # Measuring The Effect Of Influential Messages On Varying Personas
Chenkai Sun♠, Jinning Li♠, Hou Pong Chan♡, ChengXiang Zhai♠**, and Heng Ji**♠
♠University of Illinois Urbana-Champaign
♡Faculty of Science and Technology, University of Macau
♠{chenkai5, jinning4, czhai, hengji}@illinois.edu
♡[email protected]
## Abstract
Predicting how a user responds to news events enables important applications such as allowing intelligent agents or content producers to estimate the effect on different communities and revise unreleased messages to prevent unexpected bad outcomes such as social conflict and moral injury. We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona (characterizing an individual or a group) might have upon seeing a news message. Compared to the previous efforts which only predict generic comments to news, the proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response. This enables more accurate and comprehensive inference on the mental state of the persona. Meanwhile, the generated sentiment dimensions make the evaluation and application more reliable. We create the first benchmark dataset, which consists of 13,357 responses to 3,847 news headlines from Twitter. We further evaluate the SOTA neural language models with our dataset. The empirical results suggest that the included persona attributes are helpful for the performance of all response dimensions. Our analysis shows that the best-performing models are capable of predicting responses that are consistent with the personas, and as a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups.
## 1 Introduction
To prevent the flooding of misinformation and hate speech on the internet, a great amount of progress has been made toward identifying and filtering such content on social media using machine learning
![0_image_0.png](0_image_0.png)
Figure 1: An example illustrating the task. The input consists of persona attributes (e.g., historical activities and profile) and a news message. The model is asked to predict response in multiple dimensions.
models (Fung et al., 2021; Su et al., 2022; ElSherief et al., 2021; Sap et al., 2019). While directly creating message-level labels is a natural way to address the issue, it is equally important to measure the influence of the message on different viewers as a way to decide how to manage the publication of the messages.
Existing efforts (Lin and Chen, 2008; Giachanou et al., 2018; Yang et al., 2019; Artzi et al., 2012)
have made steps toward predicting population-level news response (e.g., predicting the most likely response to a news message), but neglected the importance of personas in measuring influence. According to Individual Differences Theory (Riley, 1959),
which proposes that individuals respond differently to the mass media according to their psychological needs, the same message can impact different population groups/personas in different ways. For example, a message claiming the honor of sacrificing others' lives for a religious goal might agitate people who are prone to agreeing with such messages.
It is therefore essential to consider personalization when inferring viewers' responses.
Code Repository: https://github.com/chenkaisun/response_forecasting
| Split | Train | Dev. | Test |
|-----------------------|---------|--------|--------|
| # Samples | 10,977 | 1,341 | 1,039 |
| # Headlines | 3,561 | 1,065 | 843 |
| # Users | 7,243 | 1,206 | 961 |
| Avg # Profile Tokens | 10.75 | 11.02 | 10.50 |
| Avg # Response Tokens | 12.33 | 12.2 | 11.87 |
| Avg # Headline Tokens | 19.79 | 19.82 | 19.72 |
Table 1: Summary statistics for the dataset.
On the other hand, the previous approaches that predict text-level responses (Yang et al., 2019; Wu et al., 2021; Lu et al., 2022) have only used generation metrics for automatic evaluation, yet the same sentiment can be expressed in a multitude of ways, and text alignment metrics like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) do not credit cases where the sentiments match but semantics do not align well. As a result, it is crucial to evaluate the sentiment dimensions of user responses.
We propose Response Forecasting on Personas for News Media, a task for measuring the influence of news media messages on viewers by predicting viewers' responses. In particular, the input consists of the news message and persona information
(e.g., user profile and history in our dataset), and we define response in terms of sentiment polarity, sentiment intensity, and textual response. While we include three categories in this work, many other interesting aspects can also be defined (e.g., change of attitude toward real-world entities) and we leave them to future work. Studying the problem of forecasting individual viewers' responses allows the creation of tools to assist analysts and online content producers to estimate the potential impact of messages on different communities, and sheds light on new applications such as automatically re-writing a message/email to achieve a communication goal (e.g., to obtain a positive response from the receiver). Furthermore, this new task also helps to understand associations between user attributes and emotional responses.
To construct a test bed for this task, we collect a dataset from Twitter consisting of 13,357 labeled responses to 3,847 news headlines.
Using the corpus, we examine how state-of-the-art neural models work on our task. We find that the models can predict responses with reasonable accuracy yet still leave substantial room for improvement.
We also find that the best-performing models are capable of predicting responses that are consistent with the personas, indicating that the models may be used for many exciting applications such as the discovery of groups with different opinions.
## 2 Dataset Collection
In this section, we describe how we construct data from Twitter. Specifically, we used Twitter API1 to crawl news headlines and comments below each headline from CNN Breaking News2, which is one of the most popular news accounts on Twitter.
Preprocess. We collected news headlines and corresponding comments from CNN Breaking News between January 2017 and January 2019 and removed the comments that are over 50 tokens to avoid spamming. We stripped away HTML syntax tokens and normalized user reference with special tokens "@user".
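For concreteness, a minimal sketch of this preprocessing step is shown below; the regular expressions and the whitespace-based token count are our own illustrative choices rather than the authors' released code.

```python
import re

MAX_COMMENT_TOKENS = 50  # comments longer than this are removed to avoid spamming

def normalize_comment(text):
    """Strip HTML syntax tokens and normalize user references with the special token "@user"."""
    text = re.sub(r"<[^>]+>", " ", text)   # drop HTML tags
    text = re.sub(r"&\w+;", " ", text)     # drop HTML entities such as &amp;
    text = re.sub(r"@\w+", "@user", text)  # normalize user mentions
    return " ".join(text.split())          # collapse whitespace

def keep_comment(text):
    """Keep only comments within the 50-token limit (simple whitespace tokenization)."""
    return len(text.split()) <= MAX_COMMENT_TOKENS

cleaned = normalize_comment('Thoughts? <a href="x">link</a> @SomeUser this is wild')
assert keep_comment(cleaned) and cleaned == "Thoughts? link @user this is wild"
```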
## 2.1 Persona Data
We categorize the users who post comments as responders. To describe responders, we gathered various persona attributes from Twitter, including
(1) User Profile, which is a short paragraph describing the user, and (2) User History, which are tweets written directly by the user. We consider persona as a representation of an individual or a community that characterizes interests and beliefs. User profiles and history serve as effective indicators of persona, as they reveal such information well. Since users' behavior is generally influenced by their personas, we can potentially infer personas by analyzing data that reflects their behavior. Additionally, studying historical tweets helps us understand users' communication styles. To ensure that future posting activities are not included when predicting the comment, we collect the historical posts prior to the earliest data sample in our dataset for each individual user.
## 2.2 Annotation
We obtained 14k headline and comment pairs from preprocessing. In the annotation stage, we collect labels for sentiment intensity and polarity of comments based on the context of the headline. For the 10k training instances, we produce automatic labels using deep-learning models trained on existing message-level datasets. More specifically, we train a Deberta-based model (He et al., 2020) using data from SemEval-2018 Task 1 (Mohammad et al.,
2018), reaching over 85% Pearson correlation. We then proceed to use crowd-sourcing to annotate the remaining 2k samples as our evaluation set.
1 developer.twitter.com/en/docs/twitter-api
2 twitter.com/cnnbrk
3 https://competitions.codalab.org/competitions/17751
| Name | BLEU | BScore | Meteor | R-1 | R-L | Avg. Len | rs (ϕint) | r (ϕint) | MiF1 (ϕp) | MaF1 (ϕp) |
|------|------|--------|--------|-----|-----|----------|-----------|----------|-----------|-----------|
| Majority | - | - | - | - | - | - | - | - | 43.41 | 20.18 |
| Random | - | - | - | - | - | - | 0.62 | 0.41 | 35.51 | 30.55 |
| GPT2 | 1.59 | -5.78 | 3.36 | 6.50 | 1.90 | 9.64 | 50.34 | 49.78 | 60.25 | 56.85 |
| T5 | 6.95 | -5.71 | 5.98 | **10.40** | **2.70** | 18.87 | 50.06 | 49.26 | 63.72 | 57.85 |
| BART | **8.17** | **-5.67** | **6.09** | 9.90 | 2.50 | 21.05 | 62.03 | 61.82 | **67.85** | **63.23** |
| BART w/o Profile | 7.30 | -5.70 | 5.91 | 10.00 | 2.50 | 19.47 | 57.95 | 58.20 | 67.28 | 62.26 |
| BART w/o History | 5.24 | -5.88 | 4.41 | 7.70 | 1.50 | 18.62 | 48.80 | 48.63 | 59.00 | 53.29 |
| BART w/o Both | 3.90 | -5.92 | 4.00 | 7.90 | 1.80 | 15.73 | 45.28 | 44.75 | 61.41 | 46.01 |
Task Setup. The annotation for the evaluation set is performed using the Amazon Mechanical Turk
(MTurk) crowd-sourcing platform. The workers were each asked to annotate a headline and comment pair with three workers assigned to each data sample. During the annotation, the annotator is asked to select the sentiment polarity label and the intensity of the sentiment based on their understanding of the input. The workers select positive, negative, or neutral for the sentiment polarity label and select on the integer scale of 0 to 3 for intensity.
415 workers participated in this task in total and all annotators are paid a fair wage above the federal minimum.
Quality Control. To ensure the quality of annotation, we allowed only the workers who have at least 95% approval rate and have had at least 5,000 hits approved to access our tasks. We further removed workers who have a <70% accuracy in the first 30 annotations and discarded the assignments that have completion time deviated from the expected average largely. We used majority voting to determine the final labels: if at least two annotators agreed on a label, we chose it as the final label. The resulting annotated samples achieve an inter-annotator agreement accuracy of 81.3%. We show the statistics of the dataset in Table 1.
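A sketch of the majority-vote aggregation described above is given below; the data structures are illustrative, and the worker filtering and completion-time checks are omitted.

```python
from collections import Counter

def majority_label(annotations):
    """Return the label chosen by at least two of the three annotators, else None (discarded)."""
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes >= 2 else None

votes = [["positive", "positive", "neutral"],   # -> "positive"
         ["negative", "neutral", "positive"]]   # -> None (no majority)
final_labels = [majority_label(v) for v in votes]
```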
## 3 Response Forecasting On Personas For News Media

## 3.1 Task Formulation
In this task, we aim to predict sentiment polarity, sentiment intensity, and textual response from an individual when the individual sees a message on news media. Formally, given persona P (represented by profile, or historical posts), and a source message M, the task is to predict the persona's sentiment polarity ϕp (i.e., Positive, Negative, *Neutral*)
and sentiment intensity ϕint (i.e., on a scale of 0 to 3), and textual expression t. Our goal is to encode P and produce ϕp, ϕint, and t at decoding time. We formulate the task as a conditional generation problem and use the following maximum-likelihood objective to train a generative model:

$$\sum_{i}^{N}\log p(O_{i}|O_{<i},{\mathcal{P}})$$

where O is the output string concatenating ϕp, ϕint, and t with special separator tokens.

| Model | Persona | Label | Context |
|-------|---------|-------|---------|
| GPT2 | 3.18 | 3.84 | 2.84 |
| T5 | 3.68 | 4.23 | 3.57 |
| BART | 4.35 | 4.42 | 3.99 |

Table 3: The table shows human evaluation results based on three consistency measures, supporting the automatic evaluation findings.
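To make the formulation concrete, the sketch below shows one way the source and output strings could be assembled. The separator strings ([SEP], [INT], [TEXT]) and the example persona are our own placeholders; the paper only states that special separator tokens are used.

```python
SEP, INT_SEP, TEXT_SEP = " [SEP] ", " [INT] ", " [TEXT] "   # placeholder separator tokens

def build_input(profile, history, headline):
    """Serialize persona attributes P (profile + historical posts) and the message M."""
    return SEP.join([profile] + list(history) + [headline])

def build_target(polarity, intensity, response):
    """Concatenate the three response dimensions into the output string O."""
    return f"{polarity}{INT_SEP}{intensity}{TEXT_SEP}{response}"

def parse_prediction(output):
    """Recover (phi_p, phi_int, t) from a generated string; assumes a well-formed output."""
    polarity, rest = output.split(INT_SEP, 1)
    intensity, text = rest.split(TEXT_SEP, 1)
    return polarity.strip(), int(intensity), text.strip()

# invented persona and headline strings, for illustration only
src = build_input("Weather nerd and proud parent.", ["Another snow day!"],
                  "Millions are under a blizzard warning as a powerful storm approaches")
tgt = build_target("negative", 2, "Stay safe out there, this one looks rough")
assert parse_prediction(tgt) == ("negative", 2, "Stay safe out there, this one looks rough")
```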
## 3.2 Experimental Setup
For deep learning-based text generators, we finetune decoder-only text generator GPT2 (Radford et al., 2019) as well as two Encoder-Decoder models T5 (Raffel et al., 2019) and BART (Lewis et al.,
2019). Greedy decoding is used for all the models during training. We further perform ablation on the best-performing model by removing different user attributes. We further include two naive baselines, Random and *Majority*, for sentiment dimensions, where each prediction follows either the majority label or a random label. Our neural models are implemented using Pytorch (Paszke et al., 2019)
and Huggingface Transformers (Wolf et al., 2020).
The reproducibility and hyperparameter details can be found in Appendix Table 4.
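A minimal training-step sketch under this setup is shown below. It uses the serialized strings from the sketch in Section 3.1, constructs labels without masking padding positions, and relies on torch.optim.RAdam (available in recent PyTorch releases); it is a simplification of the full configuration reported in Appendix Table 4, not the authors' code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
optimizer = torch.optim.RAdam(model.parameters(), lr=5e-5, weight_decay=5e-4)

src = ["<persona and headline serialized into one string>"]           # placeholder input
tgt = ["negative [INT] 2 [TEXT] Stay safe out there, this one looks rough"]

inputs = tokenizer(src, return_tensors="pt", truncation=True, padding=True)
labels = tokenizer(tgt, return_tensors="pt", truncation=True, padding=True).input_ids
# in real batches, padding positions in `labels` should be set to -100 so they are ignored

loss = model(**inputs, labels=labels).loss   # the maximum-likelihood objective of Section 3.1
loss.backward()
optimizer.step()
optimizer.zero_grad()
```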
## 3.2.1 Evaluation Metrics
Automatic. We use BARTScore (Yuan et al., 2021),
BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) to evaluate textual response generation performance. Note that BARTScore computes the log-likelihood of producing the reference text given the generated text using a BART model pretrained on ParaBank2. Furthermore, we use Pearson and Spearman correlation to evaluate sentiment intensity, and F1 to evaluate sentiment polarity.

4 https://github.com/neulab/BARTScore
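The sentiment-side metrics can be computed with standard packages; a minimal sketch is shown below (scipy and scikit-learn assumed, generation metrics omitted).

```python
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import f1_score

def sentiment_metrics(pred_intensity, gold_intensity, pred_polarity, gold_polarity):
    """Spearman/Pearson correlation for intensity (0-3) and micro/macro F1 for polarity."""
    rs, _ = spearmanr(pred_intensity, gold_intensity)
    r, _ = pearsonr(pred_intensity, gold_intensity)
    return {
        "spearman_rs": rs,
        "pearson_r": r,
        "micro_f1": f1_score(gold_polarity, pred_polarity, average="micro"),
        "macro_f1": f1_score(gold_polarity, pred_polarity, average="macro"),
    }
```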
Manual. We conduct human evaluation to measure the consistency of the generated outputs from those models. We define three types of consistency metrics: (1) *persona consistency*: whether the output reflects the persona's characteristics, (2) *label* consistency: whether the response text and sentiment are consistent with each other, (3) and *context* consistency: whether the output is responding to the input news headline. We randomly select 10 personas with distinct characteristics (i.e., the writing style/interest/profession do not clearly overlap)
and 10 news headlines from distinct topics, and consequently generate 100 responses using each model. The samples are distributed to 5 raters who score each output based on our metrics. The raters are master students who passed a small quiz of 20 samples with at least 80% accuracy. We additionally make sure that each rater is familiar with the persona information (e.g., profile and history)
before starting to work on the task.
## 3.3 Results
Automatic Evaluation. Across the metrics in Table 2, we can see that BART provides us with the highest quality response predictions on both sentiment and text levels. As expected, the performance of simple baselines is relatively low compared to other models, showing that the dataset does not have a class imbalance issue. While the automatic generation scores are generally low (i.e., words do not align well), the sentiment prediction scores are much higher in scale, demonstrating the importance of sentiment scoring to make a fair judgment of the result; the model needs to be credited for correctly predicting the latent sentiment even if it does not utter the exact sentence. Finally, we ablate user attribute features one by one. As shown in the table, not only both features included are effective for the task, but they are also complementary of each other.
Human Evaluation. The results from human judgments (Table 3) in general support the automatic evaluation findings. Among all three models, our approach with BART reaches the highest on all metrics, showing it can generate responses of better quality than others. The difference between models on Label Consistency is noticeably lower than other metrics, and the number suggests that pretrained language models are capable of producing sentiment labels consistent with the textual expression.
On the other hand, we find that BART can produce responses more consistent with the controllable variables than GPT2, which might be attributed to its denoising pretraining (e.g., it adapts better to different modeling formats). In fact, the outputs show that GPT2 hallucinates more often than other models.
## 3.4 Application
We hypothesize that the formulation of the task enables the application of discovering groups with different opinions on issues. We verify the hypothesis by collecting personas with contrasting stances on an issue and generating responses based on this issue. We find that the output from the model stays consistent with the persona (examples are shown in the Appendix Table 5). The result demonstrates the potential for application on social network analysis.
Since the model is able to generalize to different personas or news, an analyst can therefore replace the news headline with others to segment the population based on different issues, or manually construct a persona to visualize how a person from a particular community would respond to certain issues.
## 4 Conclusions And Future Work
We propose Response Forecasting on Personas for News Media, a new task that tests the model's capability of estimating the responses from different personas. The task enables important applications such as estimating the effect of unreleased messages on different communities as an additional layer of defense against unsafe information (e.g.,
information that might cause conflict or moral injury). We also create the first dataset for evaluating this new task and present an evaluation of the stateof-the-art neural models. The empirical results show that the best-performing models are able to predict responses with reasonable accuracy and produce outputs that are consistent with the personas.
The analysis shows that the models are also able to generate contrasting opinions when conditioned on contrasting personas, demonstrating the feasibility of applying the models to discovering social groups with different opinions on issues for future work.
In addition to this, an intriguing avenue for further research lies in utilizing response forecasting techniques to predict the popularity of discussion threads, as explored in previous studies (He et al.,
2016; Chan and King, 2018).
## Limitations
While the training method makes use of user profile description and history, one additional factor that is important is the structure between users and news articles. Knowing a user's social circles can often give hints about the user's interests and beliefs, which can potentially help the model to infer how a particular persona would respond to an issue. A possible direction is to design a method that explores the social context features (e.g., social network) via graph-based algorithms.
## Ethics
During annotation, each worker was paid $15 per hour (converted to per assignment cost on MTurk).
If workers emailed us with any concerns, we responded to them within 1 hour. The research study has also been approved by the Institutional Review Board (IRB) and Ethics Review Board at the researchers' institution. Regarding privacy concerns our dataset may bring about, we follow the Twitter API's Terms of Use5 and only redistribute content for non-commercial academic research. We will release pointers to the tweets and user profiles in the dataset.
## Acknowledgement
This research is based upon work supported in part by U.S. DARPA INCAS Program No.
HR001121C0165. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Hou Pong Chan was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ,
FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST).

5 https://developer.twitter.com/en/developer-terms/agreement-and-policy
## References
Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012.
Predicting responses to microblog posts. In proceedings of the 2012 conference of the north American chapter of the Association for Computational Linguistics: human language technologies, pages 602–606.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72.
Hou Pong Chan and Irwin King. 2018. Thread popularity prediction and tracking with a permutationinvariant model. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3392–3401. Association for Computational Linguistics.
Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech.
Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, and Avirup Sil. 2021.
Infosurgeon: Cross-media fine-grained information consistency checking for fake news detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1683–
1698.
Anastasia Giachanou, Paolo Rosso, Ida Mele, and Fabio Crestani. 2018. Emotional influence prediction of news posts. In *Twelfth International AAAI Conference on Web and Social Media*.
Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong Li, and Li Deng. 2016. Deep reinforcement learning with a combinatorial action space for predicting popular reddit threads. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016,*
Austin, Texas, USA, November 1-4, 2016, pages 1838–
1848. The Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint* arXiv:2006.03654.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Kevin Hsin-Yih Lin and Hsin-Hsi Chen. 2008. Ranking reader emotions using pairwise loss minimization and emotional distribution regression. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 136–144, Honolulu, Hawaii. Association for Computational Linguistics.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han.
2019. On the variance of the adaptive learning rate and beyond. *arXiv preprint arXiv:1908.03265*.
Hongyuan Lu, Wai Lam, Hong Cheng, and Helen Meng.
2022. Partner personas generation for dialogue response generation. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 5200–5212.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation, pages 1–17.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv preprint arXiv:1910.10683*.
John W Riley. 1959. Mass communication and the social system.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics.
Ting Su, Craig Macdonald, and Iadh Ounis. 2022.
Leveraging users' social network embeddings for fake news detection on twitter.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yuwei Wu, Xuezhe Ma, and Diyi Yang. 2021. Personalized response generation via generative split memory network. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1956–1970, Online. Association for Computational Linguistics.
Ze Yang, Can Xu, Wei Wu, and Zhoujun Li. 2019. Read, attend and comment: A deep architecture for automatic news comment generation.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. *Advances in Neural Information Processing* Systems, 34:27263–27277.
## A Appendix

## A.1 Implementation Details

We implement the models using the 4.8.2 version of the Huggingface Transformer library6 (Wolf et al., 2020). We use the Oct 1, 2021 commit version of the BART-base model (139M parameters) from Huggingface7. We use Huggingface datasets8 for automatic evaluation metrics. The BART Score comes from the author's repository9 and we used the one trained on ParaBank2. The hyperparameters for the experiment are shown in Table 4 (applied to all models) and the ones not listed in the table are set to the default values from the transformer library. In order to make the distribution of training and development sets align, we used automatically-generated labels10 during training. We use RAdam (Liu et al., 2019) as the optimizer. We perform hyperparameter search on the batch size from {16, 32} and the pretrained language model learning rate from {3e-5, 4e-5, 5e-5}. We perform our experiments on 32 GB V100 GPUs. The experiments can take up to 15 hours.

6 https://github.com/huggingface/transformers
7 https://huggingface.co/facebook/bart-base/commit/ea0107eec489da9597e9eefd095eb691fcc7b4f9
8 https://github.com/huggingface/datasets
9 https://github.com/neulab/BARTScore
10 https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest, https://competitions.codalab.org/competitions/17751

| Name | Value |
|------|-------|
| seed | 42 |
| learning rate | 5e-5 |
| batch size | 16 |
| weight decay | 5e-4 |
| RAdam epsilon | 1e-8 |
| RAdam betas | (0.9, 0.999) |
| scheduler | linear |
| warmup ratio (for scheduler) | 0.06 |
| number of epochs | 20 |
| metric for early stop | SacreBLEU |
| patience (for early stop) | 15 |
| length penalty | 1.2 |
| beam search size during eval | 5 |

Table 4: Hyperparameters. The ones below the mid-line are generation related.

| Headline: Millions are under a blizzard warning as a powerful storm is expected to bring heavy snow, wind and rain to a large swath of the country | |
|---|---|
| Purity & Love | Degradation |
| We're in the northern part of the country. Hope everyone is safe | Mother Nature sure is pissed off at us |

| Headline: Judge says Trump may have been urging supporters to 'do something more' than protest on Jan. 6 | |
|---|---|
| Pro-President Trump | Anti-President Trump |
| The liberal media & Dems are always negative when it comes to anything. They don't care about anything except themselves | Hahahahahahaha! They figured that Trump would be impeached by now! But the traitorous Republicans are slowing down the process. |

| Headline: Russia and Ukraine are at war | |
|---|---|
| Pro-Russia | Pro-Ukraine |
| Support Russia | Support Ukraine |

Table 5: Tables showing different cases that contrasting the persona (selected from existing ones) can lead to the generation of contrasting opinions on issues. For each table, the middle row contains different personas, and the third row contains the responses from each persona.
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation
✓ A2. Did you discuss any potential risks of your work?
Ethics
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Ethics
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Ethics
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
appendix
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
appendix

D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
2
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
2
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
2, Ethics

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Ethics

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wang-yu-2023-going | Going Beyond Sentence Embeddings: A Token-Level Matching Algorithm for Calculating Semantic Textual Similarity | https://aclanthology.org/2023.acl-short.49 | Semantic Textual Similarity (STS) measures the degree to which the underlying semantics of paired sentences are equivalent. State-of-the-art methods for STS task use language models to encode sentences into embeddings. However, these embeddings are limited in representing semantics because they mix all the semantic information together in fixed-length vectors, which are difficult to recover and lack explainability. This paper presents a token-level matching inference algorithm, which can be applied on top of any language model to improve its performance on STS task. Our method calculates pairwise token-level similarity and token matching scores, and then aggregates them with pretrained token weights to produce sentence similarity. Experimental results on seven STS datasets show that our method improves the performance of almost all language models, with up to 12.7{\%} gain in Spearman{'}s correlation. We also demonstrate that our method is highly explainable and computationally efficient. | # Going Beyond Sentence Embeddings: A Token-Level Matching Algorithm For Calculating Semantic Textual Similarity
Hongwei Wang and **Dong Yu**
Tencent AI Lab, Seattle, WA
{hongweiw, dyu}@global.tencent.com
## Abstract
Semantic Textual Similarity (STS) measures the degree to which the underlying semantics of paired sentences are equivalent. State-of-theart methods for STS task use language models to encode sentences into embeddings. However, these embeddings are limited in representing semantics because they mix all the semantic information together in fixed-length vectors, which are difficult to recover and lack explainability. This paper presents a token-level matching inference algorithm, which can be applied on top of any language model to improve its performance on STS task. Our method calculates pairwise token-level similarity and token matching scores, and then aggregates them with pretrained token weights to produce sentence similarity. Experimental results on seven STS
datasets show that our method improves the performance of almost all language models, with up to 12.7% gain in Spearman's correlation.
We also demonstrate that our method is highly explainable and computationally efficient.
## 1 Introduction
Measuring the similarity between two sentences is an important task in many natural language processing (NLP) applications. This makes Semantic Textual Similarity (STS) a crucial preliminary step in various domains, such as information retrieval
(Wang et al., 2020), machine translation (Castillo and Estrella, 2012), plagiarism detection (Foltýnek et al., 2019), semantic search (Mangold, 2007), and conversational systems (Santos et al., 2020).
Large pretrained language models (Devlin et al.,
2018; Liu et al., 2019) have achieved the stateof-the-art performance on STS task (Reimers and Gurevych, 2019; Gao et al., 2021; Chuang et al., 2022). These approaches typically use language models to encode input sentences into embeddings and then calculate STS using similarity metrics such as the cosine function. However, sentence embeddings have limitations in representing sentences, as all the information of the sentence is aggregated and mixed together in the fixed-length embedding.
This problem is especially pronounced for the STS
task, which requires fine-grained, low-level semantic understanding and comparison (Majumder et al.,
2016). As a result, methods based on sentence embeddings often have difficulty being well-trained and lack explainability for their predicted results.
Going beyond sentence embeddings, we propose a token-level matching algorithm for STS. Our algorithm works in the inference stage, so it can be applied on top of any trained language model to improve its performance. Specifically, given a trained language model (also called base model), we use it to generate token embeddings for the two input sentences and calculate their pairwise token similarity. We then design a novel scoring function to calculate the matching score for each token. The sentence similarity score is calculated by averaging all the token matching scores with token weights, which are learned unsupervisedly from a large corpus. Our method captures fine-grained, token-level information, which is more indicative, robust, and explainable than sentence embeddings.
We conducted experiments on seven standard STS datasets using six language models and their variants as base models. Our method is able to improve the performance of almost all existing language models, especially those "poor" ones (up to 12.7% improvement in Spearman's correlation). Specifically, our model improves SimCSE by 0.8% to 2.2%, and improves ESimCSE by 0.6% to 1.2%,
which is the current state-of-the-art model on the STS task. We also demonstrated the explainability of our model by identifying the semantically similar parts between two input sentences.
## 2 Related Work
Existing work on STS can be broadly divided into two categories: lexicon-based and semantic-based. Lexicon-based approaches (Richardson and Smeaton, 1995; Niwattanakul et al., 2013; Opitz et al., 2021) calculate the correlation between the character streams of two sentences being compared, which can be applied at the level of characters or words. Semantic-based approaches can be further divided into three categories: word-based methods (Wang et al., 2016), which treat a sentence as a list of words and compare the correlations between words; structure-based methods, which use language tools such as grammar (Lee et al., 2014), part-of-speech (Batanović and Bojić, 2015), and word order (Li et al., 2004) to process sentences and compare their structure; and vector-based methods
(Reimers and Gurevych, 2019; Liu et al., 2021; Gao et al., 2021; Wu et al., 2021; Chuang et al., 2022),
which calculate sentence embeddings that describe each sentence as a vector and have achieved the state-of-the-art performance on STS.
Our method is conceptually similar to BERTScore (Zhang et al., 2019), a token-level evaluation metric for text generation. However, there are two significant differences between these two approaches: (1) BERTScore is an evaluation metric, while our method is an algorithm for calculating STS; (2) The key designs for token matching score and token weights are also different.
## 3 The Proposed Method
Given a pair of sentences s = ⟨t1, t2, · · · , t|s|⟩ and sˆ = ⟨tˆ1, tˆ2, · · · , tˆ|sˆ|⟩, where ti (tˆi) is the i-th token in sentence s (sˆ), our goal is to learn a function f(s, sˆ) ∈ R that calculates the semantic similarity between s and sˆ.
Token-Level Similarity Matrix We can calculate token embeddings for s and sˆ using any language model, including pretrained language models (Devlin et al., 2018; Liu et al., 2019), or language models specifically finetuned for the STS task (Li et al., 2020; Gao et al., 2021; Chuang et al., 2022).
Given sentence s and sˆ, the language model generates the token embedding matrix X ∈ R|s|×d and Xˆ ∈ R|sˆ|×d, where each row corresponds to a d-dimension token embedding. The token-level similarity matrix for s and sˆ is then calculated as S = XXˆ ⊤, in which the entry Sij indicates the similarity between token ti and tˆj .
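A minimal sketch of this step with a Hugging Face encoder is shown below; the checkpoint name is one example from Table 2, and L2-normalizing the token embeddings (i.e., using cosine-style similarity) is our assumption rather than something stated above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "princeton-nlp/unsup-simcse-roberta-base"   # any base model can be plugged in here
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name).eval()

@torch.no_grad()
def token_embeddings(sentence):
    """Last-layer token embeddings X of shape (|s|, d)."""
    enc = tokenizer(sentence, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state[0]
    return torch.nn.functional.normalize(hidden, dim=-1)   # assumed L2 normalization

X = token_embeddings("a man is performing a card trick")
X_hat = token_embeddings("a man is doing trick with play cards")
S = X @ X_hat.T   # S[i, j]: similarity between token t_i and token t_hat_j
```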
Token Matching Score The token matching score measures the likelihood that a given token in one sentence can be matched to a token in the other sentence. This score takes into account two aspects:
(1) *significance*. Similar to BERTScore (Zhang et al., 2019), we match a token to its most similar token in the other sentence. For example, the significance score of ti ∈ s is sig(ti) = maxtˆj∈sˆSij .
(2) *uniqueness*. It is important to note that a high score for sig(ti) does not necessarily mean that ti is matched to one particular token in sˆ; it may simply be that Sij is high for all tˆj ∈ sˆ. To measure how unique sig(ti) is, we define the uniqueness score of ti as uni(ti) = maxtˆj∈sˆ Sij − 2nd-maxtˆj∈sˆ Sij ,
i.e., the difference between the maximum and the second maximum value of row Si·. We provide an ablation study on the two parts in our experiments.
The token matching score is defined as the sum of the above two scores:
$$S(t_{i})=\mathrm{sig}(t_{i})+\mathrm{uni}(t_{i})=2\cdot\max_{\hat{t}_{j}\in\hat{s}}\mathbf{S}_{ij}-\text{2nd-max}_{\hat{t}_{j}\in\hat{s}}\,\mathbf{S}_{ij}.\tag{1}$$

Similarly, for $\hat{t}_{j}\in\hat{s}$, we have $S(\hat{t}_{j})=2\cdot\max_{t_{i}\in s}\mathbf{S}_{ij}-\text{2nd-max}_{t_{i}\in s}\,\mathbf{S}_{ij}$.
Token Weighting Tokens typically have different levels of semantic importance. Previous work
(Zhang et al., 2019) uses inverse document frequency (IDF) as token weights, as rare words can be more indicative than common words. However, in many cases, high-frequency words can be semantically important (e.g., "not") while low-frequency words may be semantically unimportant (e.g., specific numbers). To address the mismatch between token importance and token frequency, we propose learning token weights from plain texts.
Specifically, we choose unsupervised SimCSE
(Gao et al., 2021) as the training model, which takes an input sentence and predicts itself in a contrastive objective with dropout used as noise. During the training stage of SimCSE, instead of using the last-layer embedding of the CLS token as the sentence embedding, we assign a trainable weight parameter wi for each token i in the vocabulary and calculate the weighted average Σi∈s wi ti as the sentence embedding for s, where ti is the last-layer embedding of token i ∈ s. In this way, the token weights w can be trained together with the model parameters of SimCSE on a large unsupervised corpus, which has been shown to be more semantically precise than frequency-based token weights.
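A sketch of this weighted pooling is given below; keeping the weights positive via softplus and normalizing by the weight sum are our own choices, and the SimCSE contrastive training loop itself is omitted.

```python
import torch
import torch.nn as nn

class WeightedPooler(nn.Module):
    """Sentence embedding as a token-weighted average of last-layer token embeddings.

    One trainable scalar per vocabulary entry is learned jointly with the encoder
    under the unsupervised SimCSE objective; the learned weights are reused at
    inference time as the token weights w.
    """

    def __init__(self, vocab_size):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, token_ids, token_embs, attention_mask):
        # token_ids: (B, L); token_embs: (B, L, d); attention_mask: (B, L)
        w = nn.functional.softplus(self.raw_weight)[token_ids] * attention_mask
        pooled = (w.unsqueeze(-1) * token_embs).sum(dim=1)
        return pooled / w.sum(dim=1, keepdim=True).clamp(min=1e-6)
```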
The final STS score of the input sentences (s, sˆ)
is the weighted average of all token scores:
$$f(s,{\hat{s}})={\frac{\sum_{t_{i}\in s}S(t_{i})w_{t_{i}}}{2\sum_{t_{i}\in s}w_{t_{i}}}}+{\frac{\sum_{{\hat{t}}_{i}\in{\hat{s}}}S({\hat{t}}_{i})w_{{\hat{t}}_{i}}}{2\sum_{{\hat{t}}_{i}\in{\hat{s}}}w_{{\hat{t}}_{i}}}}.\,\,\,(2)$$
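Putting Eqs. (1) and (2) together, a sketch of the inference-time scoring is given below; S would be the similarity matrix from the earlier sketch, and the uniform token weights are only a fallback for when the pretrained weights are unavailable.

```python
import torch

def matching_scores(S):
    """Eq. (1): 2*max - 2nd-max, row-wise for tokens of s and column-wise for tokens of s_hat."""
    def score(sim):                           # sim: (n, m) -> (n,)
        k = min(2, sim.size(1))
        top = sim.topk(k, dim=1).values
        second = top[:, 1] if k == 2 else top[:, 0]
        return 2 * top[:, 0] - second
    return score(S), score(S.t())

def sts_score(S, w_s, w_s_hat):
    """Eq. (2): token-weight-averaged matching scores from both directions."""
    score_s, score_s_hat = matching_scores(S)
    return ((score_s * w_s).sum() / (2 * w_s.sum())
            + (score_s_hat * w_s_hat).sum() / (2 * w_s_hat.sum())).item()

S = torch.rand(7, 9)                          # stand-in for X @ X_hat.T from the sketch above
w_s, w_s_hat = torch.ones(7), torch.ones(9)   # uniform fallback for pretrained token weights
print(sts_score(S, w_s, w_s_hat))
```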
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. gain |
|----------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-------------|
| Sentence-BERTbase | 67.6/70.3 | 70.0/77.4 | 67.4/74.5 | 75.8/80.4 | 69.8/76.0 | 73.1/78.2 | 70.4/73.8 | +5.2 |
| Sentence-BERTlarge | 71.1/73.7 | 70.1/78.1 | 70.2/76.3 | 77.9/82.4 | 73.8/79.2 | 77.2/81.1 | 71.4/75.5 | +4.9 |
| ConSERTbase | 65.9/65.7 | 73.6/81.6 | 65.9/72.7 | 75.5/81.4 | 74.0/80.3 | 73.0/78.0 | 65.9/68.2 | +4.9 |
| ConSERTlarge | 69.9/71.6 | 82.1/85.6 | 71.4/76.2 | 82.1/84.2 | 77.0/80.8 | 76.9/81.5 | 71.1/71.7 | +3.0 |
| Mirror-BERTbase | 59.7/63.7 | 59.3/80.0 | 53.4/71.1 | 68.8/79.9 | 62.5/78.4 | 57.9/76.1 | 67.7/69.3 | +12.7 |
| Mirror-RoBERTabase | 65.3/67.8 | 80.5/81.8 | 72.0/73.7 | 79.7/80.9 | 77.4/79.1 | 77.6/79.7 | 70.0/70.5 | +1.6 |
| SimCSE-BERTbase | 68.4/67.4 | 82.4/84.9 | 74.4/76.7 | 80.9/83.3 | 78.6/82.4 | 76.9/82.5 | 72.2/72.3 | +2.2 |
| SimCSE-BERTlarge | 70.9/70.8 | 84.2/86.8 | 76.4/79.1 | 84.5/85.9 | 79.8/83.4 | 79.3/84.4 | 73.9/72.8 | +2.0 |
| SimCSE-RoBERTabase | 70.2/71.1 | 81.8/82.5 | 73.2/74.8 | 81.4/82.2 | 80.7/81.7 | 80.2/82.0 | 68.6/69.6 | +1.1 |
| SimCSE-RoBERTalarge | 72.9/73.5 | 84.0/84.8 | 75.6/76.8 | 84.8/85.2 | 81.8/82.5 | 82.0/82.9 | 71.3/72.0 | +0.8 |
| ESimCSE-BERTbase | 73.4/69.9 | 83.3/85.7 | 77.3/77.8 | 82.7/84.2 | 78.8/82.4 | 80.2/82.9 | 72.3/72.8 | +1.1 |
| ESimCSE-BERTlarge | 73.2/72.6 | 85.4/86.8 | 77.7/79.5 | 84.3/85.5 | 78.9/82.2 | 80.7/84.0 | 74.9/73.1 | +1.2 |
| ESimCSE-RoBERTabase | 69.9/71.2 | 82.5/83.0 | 74.7/76.0 | 83.2/83.7 | 80.3/81.6 | 81.1/82.7 | 70.6/71.5 | +1.1 |
| ESimCSE-RoBERTalarge | 73.2/73.8 | 84.9/85.4 | 76.9/77.8 | 84.9/85.4 | 81.2/81.9 | 82.8/83.4 | 72.3/72.9 | +0.6 |
| DiffCSE-BERTbase | 72.3/66.5 | 84.4/83.7 | 76.5/75.5 | 83.9/83.0 | 80.5/80.6 | 80.6/80.7 | 71.2/70.0 | -1.3 |
| DiffCSE-RoBERTabase | 70.1/70.9 | 83.4/83.1 | 75.5/76.0 | 82.8/82.6 | 82.1/82.8 | 82.4/83.6 | 71.2/72.2 | +0.5 |
Table 1: Spearman's correlation results (in %) on seven STS datasets. The numbers before "/" are the results of the original models, and the numbers after "/" are the results of applying our method on top of the original model. The higher number is highlighted.
## 4 Experiments

## 4.1 Evaluation Setup
We evaluate our method on seven STS datasets:
STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017),
and SICK-Relatedness (Marelli et al., 2014). Each dataset consists of sentence pairs and their corresponding ground-truth similarity scores. We use Spearman's correlation to evaluate the predicted results of our method and all baseline methods on the test set. Baseline methods include Sentence-BERT (Reimers and Gurevych, 2019),
ConSERT (Yan et al., 2021), Mirror-BERT (Liu et al., 2021), SimCSE (Gao et al., 2021), ESimCSE (Wu et al., 2021), and DiffCSE (Chuang et al.,
2022). We use the pretrained models released by the authors as our base model, then compare the performance of our method with them, as shown in Table 2. The Sentence-BERT and ConSERT
models were downloaded from https://github.com/yym6472/ConSERT, while the other pretrained models can be directly loaded by their names using HuggingFace API. We use the last hidden layer representation of the [CLS] token as the sentence embedding, because it performs much better than the representation after the pooling layer in almost all cases.
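For reference, the baseline inference path described above (last-hidden-layer [CLS] representation scored with cosine similarity) can be sketched as follows; the checkpoint name is one of the Table 2 entries.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "princeton-nlp/unsup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name).eval()

@torch.no_grad()
def cls_embedding(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    return encoder(**enc).last_hidden_state[:, 0]          # last-layer [CLS] representation

def baseline_sts(s, s_hat):
    return torch.nn.functional.cosine_similarity(cls_embedding(s), cls_embedding(s_hat)).item()
```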
## 4.2 Main Result
Table 1 shows the Spearman's correlation results on the seven STS datasets. In each entry, the number
| Model | Name |
|---------------------|----------------------------------|
| Sentence-BERTbase | sup-sbert-base |
| Sentence-BERTlarge | sup-sbert-large |
| ConSERTbase | unsup-consert-base |
| ConSERTlarge | unsup-consert-large |
| Mirror-BERTbase | cambridgeltl/mirror-bert-base-uncased-sentence |
| Mirror-BERTlarge | cambridgeltl/mirror-bert-large-uncased-sentence |
| SimCSE-BERTbase | princeton-nlp/unsup-simcse-bert-base-uncased |
| SimCSE-BERTlarge | princeton-nlp/unsup-simcse-bert-large-uncased |
| SimCSE-RoBERTabase | princeton-nlp/unsup-simcse-roberta-base |
| SimCSE-RoBERTalarge | princeton-nlp/unsup-simcse-roberta-large |
| ESimCSE-BERTbase | ffgcc/esimcse-bert-base-uncased |
| ESimCSE-BERTlarge | ffgcc/esimcse-bert-large-uncased |
| ESimCSE-RoBERTabase | ffgcc/esimcse-roberta-base |
| ESimCSE-RoBERTalarge | ffgcc/esimcse-roberta-large |
| DiffCSE-BERTbase | voidism/diffcse-bert-base-uncased-sts |
| DiffCSE-RoBERTabase | voidism/diffcse-roberta-base-sts |

Table 2: Base models and their names.
before "/" is the result of the original model (using the embedding of the CLS token as the sentence embedding), while the number after "/" is the result of applying our method on top of the original model. The last column shows the average absolute gain of our method compared to the baseline method across all tasks.
Our method can improve the results for almost all models. The improvement is particularly
| Variant | STS-B | SICK-R |
|---------------------------------------------------|---------|----------|
| TOKEN MATCHING FUNCTION | | |
| 2·max−2nd-max (our model) | 82.0 | 69.6 |
| max | 81.6 | 69.3 |
| max−2nd-max | 68.1 | 56.0 |
| TOKEN WEIGHTS | | |
| Pretrained weights (our model) | 82.0 | 69.6 |
| IDF weights | 79.9 | 67.5 |
| No weights | 80.2 | 68.6 |
| max, IDF weights (BERTScore) | 79.9 | 67.8 |
significant if the original model does not perform well, e.g., Sentence-BERT (+5.2% and +4.9%) and ConSERT (+4.9% and +3.0%). From another perspective, our method can be seen as a universal booster for language models on the STS task. For example, our method can improve the Spearman's correlation of all base models to around 80 or even higher on STS-B dataset, regardless of the original performance of the base model. This indicates that even "poor" language models can still generate high-quality token embeddings that preserve token similarity information very well. However, existing language models only use a single embedding to represent a sentence, which mixes all the information of the sentence together and makes it difficult for language models to be well-trained.
## 4.3 Ablation Study
We investigate the impact of different token matching functions and token weights, which are two key components of our method. The base model here is SimCSE-RoBERTabase, but the conclusion is similar for other base models. The results are reported in Table 3. For the token matching function, we find that the performance slightly drops when using only max and substantially drops when using only max−2nd-max . This suggests that significance is more important in measuring token matching scores, while considering uniqueness further improves the performance. For token weights, we observe that IDF weights do not perform well and are even worse than the variant with no token weights.
We also evaluate the variant of max + IDF weights, which is the same design as BERTScore (Zhang et al., 2019). Our model outperforms BERTScore by around 2% on both datasets.
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
## 4.4 Running Time Analysis
We investigate the running time of our method.
We set the base model as SimCSE-RoBERTabase or SimCSE-RoBERTalarge, and then run the original inference method and our inference method on STS-B dataset with batch size ranging from 4 to 128 on an Nvidia Tesla P40 GPU. The results, shown in Figure 1, indicate that our method only incurs an average time overhead of 12.9% and 9.1%
on the two base models, respectively.
## 4.5 Case Study
As a case study, we consider a sentence pair from the STS-B dataset: "a man is performing a card trick" and "a man is doing trick with play cards", whose ground-truth similarity is at the highest level. The token similarity matrix for this pair is shown in Figure 2, with a dark/light blue background indicating the first/second highest score in each row, and bold/black numbers indicating the first/second highest score in each column.
Exactly matched tokens ("a", "man", "is", and
"trick") receive the highest scores, which are indicated by color dark green. Tokens that cannot be matched ("a", "with", and "play") receive the lowest scores, which are indicated by color light green. Tokens that are not exactly the same but are semantically equivalent ("performing"-"doing",
"card"-"cards") receive scores that fall in the middle level. While using only max (i.e. significance)
as the token matching score may produce similar result, adding the term max−2nd-max (i.e. uniqueness) improves the reliability and distinguishability of those scores. This is why our model performs slightly better than max, as shown in Table 3.
Additionally, Figure 2 demonstrates that pretrained token weights are more accurate than IDF
token weights. Some semantically unimportant tokens, such as "a", "is", and "with", are given too much weight by the IDF method, which affects the overall accuracy of the prediction. As a result, the predicted STS score using pretrained token weights (0.967) is also more accurate than that using IDF token weights (0.959).
## 5 Conclusion
This paper presents a token-level matching algorithm for calculating STS between pairs of sentences. Unlike previous approaches that use pretrained language models to encode sentences into embeddings, our method calculates pairwise token similarities and then applies a token matching function to these scores. The resulting scores are averaged with pretrained token weights to produce the final sentence similarity. Our model consistently improves the performance of existing language models and is also highly explainable, with minimal extra time overhead during inference.
## Limitations
Our model does not follow existing sentence embedding models that encode sentences into embeddings. Therefore, one limitation of our method is that it is specifically designed for the STS task (or, more precisely, sentence comparison tasks) and cannot be easily transferred to other tasks, such as sentence classification.
Additionally, our approach incurs a slight extra time overhead of approximately 10%, which may be unacceptable for applications that require high time efficiency.
Our method only takes into account the semantic comparison of individual tokens, rather than considering the meaning of combinations of tokens or phrases. A possible direction for future work is to incorporate the consideration of compositional semantics, for example by grouping tokens into phrases and applying a similar phrase-level matching algorithm.
## References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In *Proceedings of the 9th international* workshop on semantic evaluation (SemEval 2015),
pages 252–263.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M
Cer, Mona T Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In *SemEval@ COLING*,
pages 81–91.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez Agirre, Rada Mihalcea, German Rigau Claramunt, and Janyce Wiebe. 2016. Semeval2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In *SemEval-2016.*
10th International Workshop on Semantic Evaluation; 2016 Jun 16-17; San Diego, CA. Stroudsburg
(PA): ACL; 2016. p. 497-511. ACL (Association for Computational Linguistics).
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A
pilot on semantic textual similarity. In * SEM 2012:
The First Joint Conference on Lexical and Computational Semantics–Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385– 393.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. * sem 2013 shared task: Semantic textual similarity. In Second joint conference on lexical and computational semantics
(* SEM), volume 1: proceedings of the Main conference and the shared task: semantic textual similarity, pages 32–43.
Vuk Batanovic and Dragan Boji ´ c. 2015. Using part-of- ´
speech tags as deep-syntax indicators in determining short-text semantic similarity. *Computer Science and* Information Systems, 12(1):1–31.
Julio Castillo and Paula Estrella. 2012. Semantic textual similarity for mt evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 52–58.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. *arXiv preprint* arXiv:1708.00055.
Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljaciˇ c, Shang- ´
Wen Li, Wen-tau Yih, Yoon Kim, and James Glass. 2022. Diffcse: Difference-based contrastive learning for sentence embeddings. *arXiv preprint* arXiv:2204.10298.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Tomáš Foltynek, Norman Meuschke, and Bela Gipp. `
2019. Academic plagiarism detection: a systematic literature review. *ACM Computing Surveys (CSUR)*,
52(6):1–42.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence embeddings. *arXiv preprint arXiv:2104.08821*.
Ming Che Lee, Jia Wei Chang, and Tung Cheng Hsieh.
2014. A grammar-based semantic similarity algorithm for natural language sentences. The Scientific World Journal, 2014.
Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. *arXiv* preprint arXiv:2011.05864.
Yuhua Li, Zuhair Bandar, David McLean, James O'shea, et al. 2004. A method for measuring sentence similarity and iits application to conversational agents. In FLAIRS Conference, pages 820–825.
Fangyu Liu, Ivan Vulic, Anna Korhonen, and Nigel ´
Collier. 2021. Fast, effective, and self-supervised:
Transforming masked language models into universal lexical and sentence encoders. arXiv preprint arXiv:2104.08027.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Goutam Majumder, Partha Pakray, Alexander Gelbukh, and David Pinto. 2016. Semantic textual similarity methods, tools, and applications: A survey. *Computación y Sistemas*, 20(4):647–665.
Christoph Mangold. 2007. A survey and classification of semantic search approaches. International Journal of Metadata, Semantics and Ontologies, 2(1):23–34.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A sick cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216–
223.
Suphakit Niwattanakul, Jatsada Singthongchai, Ekkachai Naenudorn, and Supachanun Wanapu.
2013. Using of jaccard coefficient for keywords similarity. In *Proceedings of the international multiconference of engineers and computer scientists*,
volume 1, pages 380–384.
Juri Opitz, Angel Daza, and Anette Frank. 2021.
Weisfeiler-leman in the bamboo: Novel amr graph metrics and a benchmark for amr graph similarity.
Transactions of the Association for Computational Linguistics, 9:1425–1441.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
arXiv preprint arXiv:1908.10084.
Ray Richardson and Alan F Smeaton. 1995. Using wordnet in a knowledge-based approach to information retrieval.
José Santos, Ana Alves, and Hugo Gonçalo Oliveira.
2020. Leveraging on semantic textual similarity for developing a portuguese dialogue system. In *International Conference on Computational Processing of* the Portuguese Language, pages 131–142. Springer.
Yanshan Wang, Sunyang Fu, Feichen Shen, Sam Henry, Ozlem Uzuner, Hongfang Liu, et al. 2020. The 2019 n2c2/ohnlp track on clinical semantic textual similarity: overview. *JMIR medical informatics*,
8(11):e23375.
Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah.
2016. Sentence similarity learning by lexical decomposition and composition. arXiv preprint arXiv:1602.07019.
Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. Esimcse:
Enhanced sample building method for contrastive learning of unsupervised sentence embedding. arXiv preprint arXiv:2109.04380.
Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. Consert: A contrastive framework for self-supervised sentence representation transfer. *arXiv preprint arXiv:2105.11741*.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q
Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. *arXiv preprint* arXiv:1904.09675.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✓ A2. Did you discuss any potential risks of your work?
6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1 and 2
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhu-etal-2023-robust | Robust Learning for Multi-party Addressee Recognition with Discrete Addressee Codebook | https://aclanthology.org/2023.acl-short.50 | Addressee recognition aims to identify addressees in multi-party conversations. While state-of-the-art addressee recognition models have achieved promising performance, they still suffer from the issue of robustness when applied in real-world scenes. When exposed to a noisy environment, these models regard the noise as input and identify the addressee in a pre-given addressee closed set, while the addressees of the noise do not belong to this closed set, thus leading to the wrong identification of addressee. To this end, we propose a Robust Addressee Recognition (RAR) method, which discrete the addressees into a character codebook, making it able to represent open set addressees and robust in a noisy environment. Experimental results show that the introduction of the addressee character codebook helps to represent the open set addressees and highly improves the robustness of addressee recognition even if the input is noise. | # Robust Learning For Multi-Party Addressee Recognition With Discrete Addressee Codebook
Pengcheng Zhu, Wei Zhou, Kuncai Zhang, Yuankai Ma, Haiqing Chen Alibaba Group
{tangju.zpc, fayi.zw, kuncai.zkc, yuankai.myk, haiqing.chenhq}@alibaba-inc.com
## Abstract
Addressee recognition aims to identify addressees in multi-party conversations. While state-of-the-art addressee recognition models have achieved promising performance, they still suffer from the issue of robustness when applied in real-world scenes. When exposed to a noisy environment, these models regard the noise as input and identify the addressee in a pre-given addressee closed set, while the addressees of the noise do not belong to this closed set, thus leading to the wrong identification of addressee. To this end, we propose a Robust Addressee Recognition Model (RARM), which discretizes the addressees into a codebook, making it able to represent addressees in the noise and robust in a noisy environment. Experimental results show that the introduction of the addressee codebook helps to represent the addressees in the noise and highly improves the robustness of addressee recognition even if the input is noise.
## 1 Introduction
Different from two-party conversation, multiparty conversation has more than two interlocutors (Traum, 2003; Uthus and Aha, 2013; Meng et al., 2018; Gu et al., 2021). Beyond response generation or selection (Hu et al., 2019; Liu et al., 2019; Gu et al., 2020; Wang et al., 2020b), there is also a need for recognizing the addressee of the multi-party conversation (Ouchi and Tsuboi, 2016; Zhang et al., 2018; Le et al., 2019).
Addressee recognition aims to identify to whom each interlocutor is speaking.
Ouchi and Tsuboi (2016) formalize the task as follows: given a context, the system is required to select an addressee appearing in the previous context. Meng et al. (2018) realize the importance of speaker modeling and propose speaker classification as a surrogate task for general speaker modeling. Zhang et al. (2018) use
a novel dialogue encoder to update speaker embeddings in a role-sensitive way. Le et al. (2019)
not only focuses on predicting the addressee of the last utterance but also aims to predict all the missing addressees. Gu et al. (2021) propose a unified multi-party pre-trained model and design five self-supervised tasks based on the interactions among utterances and interlocutors.
These works suppose that the multi-party conversation happens in a quiet environment, which can lead to serious system failure when exposed to a noisy environment. Many other works recently focus on robust learning in practice (Wang et al., 2020a; Xue et al., 2020; Liu et al., 2021; Wang et al., 2022). However, these robust learning works mainly focus on two-party conversations and introduce noises by replacing, inserting, swapping, and deleting characters at the word level or words at the sentence level. The main difference between two-party conversation and multiparty conversation is that two-party conversation mainly focuses on perturbations at the semantic level, while beyond semantic perturbation, the multi-party conversation should consider the perturbations that are not intended for the current conversation, even if the noise is semantically complete. As shown in Figure 1, the noise is semantically complete but doesn't belong to the current conversation.
Since the number of addressees in a noisy environment is unknowable, giving a fixed-length addressee matrix is not feasible. On account of the above issues, we propose the Robust Addressee Recognition Model (RARM), which discretizes the addressees into a codebook and represents addressees by addressee codes. We evaluate our method on two types of addressee noise: in-domain addressee noise (ID-AN) and out-domain addressee noise (OD-AN). ID-AN is noise that has the same domain as the current multi-party conversation, and OD-AN is noise that does not.
The main contributions are as follows: (1) We formalize the task of Robust Addressee Recognition (RAR) task in multi-party conversation and propose the Robust Addressee Recognition Model (RARM), which discretizes the addressees into a codebook, making addressee recognition robust in a noisy environment. (2) We conduct experiments on two types of noise: in-domain and out-domain noise, experimental results show that the addressee codebook helps to represent the addressees in noise effectively and highly improves the robustness of addressee recognition even if the input is in-domain or out-domain noise.
## 2 Methods

## 2.1 Task Definition
We follow Ouchi and Tsuboi (2016) to define addressee recognition. Given a multi-party conversation S, the task is to select an addressee for the last utterance q from the candidate set A.
$$\mathrm{GIVEN}: \quad S=(q, C) \tag{1}$$
$$\mathrm{PREDICT}: \quad \hat{a} \in A \tag{2}$$
where C is the context. When considering noise N, the formulation of robust addressee recognition is updated as:
$$\mathrm{GIVEN}: \quad S=(q, C) \tag{3}$$
$$\mathrm{PREDICT}: \quad \hat{a} \in \{A, N\} \tag{4}$$
## 2.2 Robust Addressee Recognition Model
One straightforward way to represent the noise is to add an extra vector to the addressee matrix, but this is too coarse to represent all addressees in the noise with a single vector. In this section, we propose to utilize VQ-VAE (van den Oord et al.,
2017) to discretize addressees into a codebook.
There are three parts in the RARM: an encoder for query and context representation, a discrete addressee codebook for addressee representation, and a classifier for addressee classification. The model architecture is illustrated in Figure 2.
## 2.3 Encoder
We use Transformer (Vaswani et al., 2017) with 12 layers as Encoder, and the input is the concatenation of query q and context C with special token
'[SEP]'. The representation of the input sequence is defined as:
$$h = \mathrm{Transformer}(q, C) \tag{5}$$
where h is the hidden state at the position of the special token '[CLS]'.
## 2.4 Discrete Addressee Codebook
The addressee codebook is an embedding table $e \in \mathbb{R}^{K \times d}$, where K is the number of discrete latent variables. We follow van den Oord et al. (2017) to discretize the addressee into the codebook as follows:
$$p(z_{1}|h)=\begin{cases}1 & \text{if } k=\operatorname*{arg\,min}_{j}||h-e_{j}||^{2}\\ 0 & \text{otherwise}\end{cases} \tag{6}$$
thus h is mapped onto the embedding $e_k$ as:
$$z_{1}=e_{k}, \quad \text{where } k=\operatorname*{arg\,min}_{j}||h-e_{j}||^{2} \tag{7}$$
In order to augment the representation of addressees, we discretize an addressee into a code set Z instead of a single code. The difference between h and z1 is fed back into the discretization process, and the steps above are repeated n times as follows:
$$h_{1} = h - z_{1} \tag{8}$$
$$z_{2} = Discrete(h_{1}) \tag{9}$$
| Types | Hu et al. (2019) | Ouchi and Tsuboi (2016), length-5 | length-10 | length-15 |
|---|---|---|---|---|
| ID-AN | 93517 / 1500 / 1500 | 138336 / 8571 / 9800 | 148567 / 9292 / 10691 | 146943 / 9244 / 10615 |
| OD-AN | 93517 / 1500 / 1500 | 138336 / 8571 / 9800 | 148567 / 9292 / 10691 | 146943 / 9244 / 10615 |
| Overall | 311725 / 5000 / 5000 | 461120 / 28570 / 32668 | 495226 / 30974 / 35638 | 489812 / 30815 / 35385 |

Table 1: The statistics (train / valid / test) of the constructed ID-AN, OD-AN, and Overall data in the dataset.
| Models | Types | Hu et al. (2019) | Ouchi and Tsuboi (2016), length-5 | length-10 | length-15 |
|---|---|---|---|---|---|
| BERT | ID-AN | 59.3 | 51.3 | 46.6 | 46.2 |
| | OD-AN | 81.7 | 74.5 | 70.1 | 68.9 |
| | Overall | 80.4 | 71.7 | 67.3 | 66.8 |
| MPC-BERT | ID-AN | 64.6 | 56.1 | 53.3 | 51.8 |
| | OD-AN | 84.4 | 77.4 | 74.8 | 73.5 |
| | Overall | 83.8 | 75.1 | 72.6 | 70.4 |
| RARM w/o codebook | ID-AN | 61.3 | 53.8 | 50.4 | 48.6 |
| | OD-AN | 82.6 | 75.8 | 72.7 | 70.3 |
| | Overall | 81.5 | 72.8 | 70.1 | 68.4 |
| RARM w/o AD loss | ID-AN | 65.9 | 57.7 | 54.5 | 52.9 |
| | OD-AN | 85.1 | 79.4 | 75.8 | 74.1 |
| | Overall | 84.5 | 76.3 | 72.7 | 71.1 |
| RARM | ID-AN | 67.7 | 58.3 | 55.2 | 52.6 |
| | OD-AN | 86.4 | 79.2 | 77.6 | 74.7 |
| | Overall | 85.1 | 76.9 | 73.8 | 71.5 |

Table 2: Addressee recognition accuracy (%) on the two benchmarks for ID-AN, OD-AN, and Overall test data.
where *Discrete* denotes the discretization process in equations (6) and (7). The final representation of an addressee is thus computed as:
$$Z=\{z_{1},z_{2},\ldots,z_{n}\} \tag{10}$$
We set n = 3 in all experiments. The resulting addressee is then selected as follows:
$$P_{a} = \mathrm{Softmax}\left(W([z_{1}:z_{2}:z_{3}])+b\right) \tag{11}$$
$$\hat{a} = \operatorname*{Argmax}_{a\in\{A,N\}}\left(P_{a}\right) \tag{12}$$
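A minimal PyTorch sketch of the discretization and classification steps in equations (6)-(12) is given below. The codebook size and dimensionality follow Section 3.1, but the size of the candidate set {A, N} is a hypothetical placeholder, and the straight-through gradient trick used for training VQ-VAE codes is omitted for brevity.

```python
import torch
import torch.nn as nn

class DiscreteAddresseeCodebook(nn.Module):
    """Residual discretisation of the encoder state h into n codebook codes,
    followed by a linear classifier over the candidate set {A, N}."""

    def __init__(self, codebook_size=200, dim=768, n_codes=3, num_candidates=11):
        super().__init__()
        # num_candidates is illustrative; it depends on the conversation data.
        self.codebook = nn.Embedding(codebook_size, dim)        # e in R^{K x d}
        self.n_codes = n_codes
        self.classifier = nn.Linear(dim * n_codes, num_candidates)

    def forward(self, h):                                       # h: (batch, dim), the [CLS] state
        residual, codes = h, []
        for _ in range(self.n_codes):
            dist = torch.cdist(residual, self.codebook.weight)  # (batch, K) distances, Eq. (6)
            z = self.codebook(dist.argmin(dim=-1))              # nearest entry e_k, Eq. (7)
            codes.append(z)
            residual = residual - z                             # feed the difference back, Eq. (8)
        logits = self.classifier(torch.cat(codes, dim=-1))      # W[z1:z2:z3] + b, Eq. (11)
        return logits                                           # softmax + argmax gives â, Eq. (12)
```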
## 2.5 Training
The RARM is trained with two losses as follows:
Classification loss with VQ-VAE aims to train the codebook and recognize addressees with the codes. We follow van den Oord et al. (2017) and train our model with the loss defined as follows:
$$loss_{vq}=-\log p(y|Z)+||Z-sg[h]||_{2}^{2}+\beta||h-sg[Z]||_{2}^{2} \tag{13}$$
where sg stands for the stopgradient operator that has zero partial derivatives. The first term is classification loss. The middle term is the codebook
loss that optimizes the codebook embeddings. The encoder is optimized by the first and last terms, and we set β = 0.25 in all experiments.
Addressee Discrete loss is utilized to discretize addressees into the codebook. Zhao et al. (2017) demonstrated the effectiveness of a bag-of-words (BOW) loss for discrete latent variables, so we define the addressee discrete loss as follows:
$$loss_{discrete}=-\sum_{i=0}^{|y_{q}|}\log p(y_{i}|Z) \tag{14}$$
where $y_q$ is the set of words in the query.
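The two losses can be sketched as follows. The 0.02 weight on the BOW term follows Section 3.1, while the tensor shapes, the bag-of-words projection `bow_logits`, and the omission of a straight-through estimator are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def rarm_losses(logits, target, h, z, bow_logits, query_token_ids,
                beta=0.25, bow_weight=0.02):
    """Training objective sketch (Eqs. 13-14): VQ-VAE classification loss plus
    the addressee discrete (BOW) loss over the query words."""
    cls_loss = F.cross_entropy(logits, target)            # -log p(y|Z)
    codebook_loss = F.mse_loss(z, h.detach())             # ||Z - sg[h]||^2, updates the codebook
    commit_loss = beta * F.mse_loss(h, z.detach())        # beta * ||h - sg[Z]||^2, updates the encoder
    # BOW loss: every token of the query should be predictable from Z.
    log_probs = F.log_softmax(bow_logits, dim=-1)         # (batch, vocab)
    bow_loss = -log_probs.gather(-1, query_token_ids).mean()
    return cls_loss + codebook_loss + commit_loss + bow_weight * bow_loss
```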
## 3 Experiment

## 3.1 Experimental Setups And Dataset
We evaluate our proposed methods on the Ubuntu IRC benchmarks of Le et al. (2019) and Ouchi and Tsuboi (2016). We define two types of addressee noise: in-domain addressee noise (ID-AN) and out-domain addressee noise (OD-AN). ID-AN is noise that shares the domain of the current multi-party conversation, and OD-AN is noise that does not.
For the construction of ID-AN, we replace the query by sampling a query from another conversation in the same Ubuntu benchmark. For OD-AN, we replace the query by sampling a query from the DailyDialog dataset (Li et al., 2017), a high-quality chit-chat dialogue dataset. The statistics of the constructed ID-AN, OD-AN, and Overall data are shown in Table 1. We follow the splitting strategy of Gu et al. (2021) and set the ratio of clean / ID-AN / OD-AN data to 40%/30%/30% in the train/dev/test sets.
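A sketch of this noise-construction procedure is given below; the field names and the per-example sampling of the 40/30/30 ratio are illustrative, not the actual data schema or splitting code.

```python
import random

def build_noisy_split(conversations, dailydialog_queries, ratios=(0.4, 0.3, 0.3), seed=0):
    """Construct a split with clean / ID-AN / OD-AN examples.

    Each example keeps its context; the query is either kept (clean), replaced
    by a query from another Ubuntu conversation (ID-AN), or replaced by a
    DailyDialog query (OD-AN)."""
    rng = random.Random(seed)
    out = []
    for conv in conversations:
        r = rng.random()
        if r < ratios[0]:                                   # clean: original addressee label
            query, label = conv["query"], conv["addressee"]
        elif r < ratios[0] + ratios[1]:                     # ID-AN: same-domain noise
            query, label = rng.choice(conversations)["query"], "NOISE"
        else:                                               # OD-AN: out-of-domain noise
            query, label = rng.choice(dailydialog_queries), "NOISE"
        out.append({"context": conv["context"], "query": query, "addressee": label})
    return out
```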
We set codebook size K to 200 and the dimension of embedding vector d to 768. The weight of the addressee discrete loss is 0.02. The checkpoint with the lowest loss on the validation set is selected for testing.
We compare RARM with baselines: (1) **BERT**
is a pre-trained bidirectional Transformer classification model with self-attention (Devlin et al., 2019).
(2) **MPC-BERT** is a pretrained language model for multi-party conversation understanding, which achieves SOTA performance in multi-party addressee recognition (Gu et al., 2021).
## 3.2 Automatic Evaluation
We follow Ouchi and Tsuboi (2016) and evaluate the task with accuracy. Three types of results are listed in Table 2: ID-AN/OD-AN denotes performance only on ID-AN/OD-AN data, and Overall denotes performance on all data, including ID-AN and OD-AN.
As shown in the table, our proposed RARM
achieves the best performance compared with the baselines. Though MPC-BERT achieves SOTA performance in the addressee recognition task (Gu et al., 2021), it fails to remain robust on in-domain and out-domain noisy data. We observe that the performance on ID-AN is much worse than on OD-AN, because in-domain noise is closer to the conversation than out-domain noise, which makes noise from the same domain hard to distinguish.
Ablation study results are also listed in Table 2. We find that the performance decreases significantly without the codebook, demonstrating the importance of the discrete addressee codebook. The performance also drops without the addressee discrete loss, mainly because the BOW loss helps to represent the discrete latent variables.
## 3.3 Analysis On Addressee Codebook
We sample 10 codes for visualization in Figure 3. We calculate the word embeddings that are close to the sampled codes by cosine similarity and visualize them in 3(a). We find that different codes represent different semantic clusters. To further study the meaning of each code, we sample and visualize five embeddings for each code in 3(b). The figure shows that code 34 (dots in blue) is close to
'ubuntu', 'linux', and 'microsoft', which represent the words related to the operating system. Similarly, code 90 (dots in purple) is close to 'CPU',
'disks', and 'memory', which are related to disk storage capacity.
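The nearest-word lookup used for this analysis can be sketched as follows; it assumes the encoder's input embedding matrix serves as the word embeddings, which may differ from the exact analysis setup.

```python
import torch
import torch.nn.functional as F

def nearest_words_per_code(codebook, word_embeddings, id_to_word, code_ids, top_k=5):
    """For each sampled code, return the top-k closest words by cosine similarity."""
    codes = F.normalize(codebook.weight[code_ids], dim=-1)   # (n_sampled, d)
    words = F.normalize(word_embeddings, dim=-1)             # (vocab, d)
    sims = codes @ words.T                                   # cosine similarities
    top = sims.topk(top_k, dim=-1).indices
    return {int(c): [id_to_word[int(i)] for i in row] for c, row in zip(code_ids, top)}

# Example call (assuming a HuggingFace tokenizer and encoder):
# vocab = tokenizer.convert_ids_to_tokens(list(range(len(tokenizer))))
# nearest_words_per_code(model.codebook, encoder.get_input_embeddings().weight,
#                        vocab, torch.tensor([34, 90]))
```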
## 3.4 Analysis On Addressee Representation
We randomly sample addressees in clean/IDAN/OD-AN data and visualize corresponding codes in Figure 4. We visualize the word embeddings that are close to the addressee codes in clean and OD-AN data in Figure 4(a). Since we conduct experiments on Ubuntu datasets, the addressee codes in clean data are discretized to Ubunturelated words, e.g., 'bug', 'upgrade', and 'package', while the correlation of addressee codes in OD-AN with Ubuntu is small, e.g., 'sunny' and 'seafood'. We find that it's easy to distinguish addressees in multi-party conversations from outdomain noise since they don't share the same
codes.
We visualize the word embeddings that are close to the addressee codes in clean and ID-AN data in Figure 4(b) and 4(c). The figures show that clean and ID-AN addressees share the same codes, e.g., code 7, because the ID-AN is also sampled from the Ubuntu IRC datasets, that is, in the same domain. Though it is difficult to distinguish the addressee in clean and ID-AN data at the code semantic level, we observe that the cosine similarity between codes in clean data is smaller than codes in ID-AN data. Code 22 and code 133 in 4(b) mainly represent 'version' and 'upgrade', we can easily infer that the addressee mainly discusses the problem of version upgrade. While code 34 and code 189 represent operating system and disk storage capacity respectively in 4(c), the correlation between code 34 and code 189 is relatively small.
## 4 Conclusion
In this paper, to improve the robustness of multiparty addressee recognition, we formalize the Robust Addressee Recognition (RAR) task and propose the Robust Addressee Recognition Model
(RARM), which discretizes the addressees into a codebook, making it able to represent addressees in noise. We evaluate our method in two types of addressee noise: ID-AN and OD-AN. Experimental results demonstrate that the addressee codebook helps to represent the addressees in noise effectively and highly improves the robustness of addressee recognition even if the input is indomain or out-domain noise.
## 5 Limitations
The main limitation is that in-domain noise is hard to recognize in noisy multi-party conversations. Though our proposed RARM achieves the best performance compared to all baselines, we find that if the content of the noise is close to the content of the multi-party conversation, the average accuracy of all methods is not high; how to improve performance on these hard samples is worthy of further study.
## References
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186.
Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2041–2044.
Jia-Chen Gu, Chongyang Tao, Zhenhua Ling, Can Xu, Xiubo Geng, and Daxin Jiang. 2021. Mpc-bert: A
pre-trained language model for multi-party conversation understanding. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 3682–3692.
Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, and Rui Yan. 2019. Gsn: A
graph-structured network for multi-party dialogues.
In *In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence*,
page 5010–5016.
Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, and Rui Yan. 2019.
Who is speaking to whom? learning to identify utterance addressee in multi-party conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the
9th International Joint Conference on Natural Language Processing, pages 1909–1919.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, pages 986–995.
Cao Liu, Kang Liu, Shizhu He, Zaiqing Nie, and Jun Zhao. 2019. Incorporating interlocutor-aware context into response generation on multi-party chatbots. In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 718–727.
Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Hongguang Li, Weiran Nie, Cheng Li, Wei Peng, and Minlie Huang. 2021. Robustness testing of language understanding in task-oriented dialog. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language* Processing, pages 2467–2480.
Zhao Meng, Lili Mou, and Zhi Jin. 2018. Towards neural speaker modeling in multi-party conversation:
The task, dataset, and models. In Proceedings of the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, volume 32.
Hiroki Ouchi and Yuta Tsuboi. 2016. Addressee and response selection for multi-party conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2133–2143.
David Traum. 2003. Issues in multiparty dialogues.
In *Workshop on Agent Communication Languages*,
pages 201–211. Springer.
David C Uthus and David W Aha. 2013. Multiparticipant chat analysis: A survey. *Artificial Intelligence*,
pages 106–121.
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. *Advances in neural information processing systems*, page 6306–6315.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing systems*, 30.
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, and Hai Zhao. 2022. Distinguishing non-natural from natural adversarial samples for more robust pretrained language model. In *Findings of the Association for Computational Linguistics*, pages 905–915.
Longshaokan Wang, Maryam Fazel-Zarandi, Aditya Tiwari, Spyros Matsoukas, and Lazaros Polymenakos. 2020a. Data augmentation for training dialog models robust to speech recognition errors. In
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 63–
70.
Weishi Wang, Steven CH Hoi, and Shafiq Joty. 2020b.
Response selection for multi-party conversations with dynamic topic tracking. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing, pages 6581–6591.
Haiyang Xue, Yang Feng, Shuhao Gu, and Wei Chen.
2020. Robust neural machine translation with asr errors. In *Proceedings of the First Workshop on Automatic Simultaneous Translation*, pages 15–23.
Rui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir Radev. 2018. Addressee and response selection in multi-party conversations with speaker interaction rnns. In *Proceedings of the Association for* the Advancement of Artificial Intellige Conference on Artificial Intelligence, page 5690–5697.
Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi.
2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics*, pages 654–664.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 5
✓ A2. Did you discuss any potential risks of your work?
section 5
✓ A3. Do the abstract and introduction summarize the paper's main claims?
section Abstract
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 3.2 3.3 3.4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 3.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 3.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 3.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 3.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
loakman-etal-2023-twistlist | {T}wist{L}ist: Resources and Baselines for Tongue Twister Generation | https://aclanthology.org/2023.acl-short.51 | Previous work in phonetically-grounded language generation has mainly focused on domains such as lyrics and poetry. In this paper, we present work on the generation of tongue twisters - a form of language that is required to be phonetically conditioned to maximise sound overlap, whilst maintaining semantic consistency with an input topic, and still being grammatically correct. We present TwistList, a large annotated dataset of tongue twisters, consisting of 2.1K+ human-authored examples. We additionally present several benchmark systems (referred to as TwisterMisters) for the proposed task of tongue twister generation, including models that both do and do not require training on in-domain data. We present the results of automatic and human evaluation to demonstrate the performance ofexisting mainstream pre-trained models in this task with limited (or no) task specific training and data, and no explicit phonetic knowledge. We find that the task of tongue twister generation is challenging for models under these conditions, yet some models are still capable of generating acceptable examples of this language type. | # Twistlist: Resources And Baselines For Tongue Twister Generation
Tyler Loakman1∗, Chen Tang2∗ **and Chenghua Lin**1†
1Department of Computer Science, The University of Sheffield, UK
2Department of Computer Science, The University of Surrey, UK
{tcloakman1,c.lin}@sheffield.ac.uk [email protected]
## Abstract
Previous work in phonetically-grounded language generation has mainly focused on domains such as lyrics and poetry. In this paper, we present work on the generation of tongue twisters
- a form of language that is required to be phonetically conditioned to maximise sound overlap, whilst maintaining semantic consistency with an input topic, and still being grammatically correct. We present **TwistList**, a large annotated dataset of tongue twisters, consisting of 2.1K+
human-authored examples. We additionally present several benchmark systems (referred to as TwisterMisters) for the proposed task of tongue twister generation, including models that both do and do not require training on in-domain data.
We present the results of automatic and human evaluation to demonstrate the performance of existing mainstream pre-trained models in this task with limited (or no) task specific training and data, and no explicit phonetic knowledge. We find that the task of tongue twister generation is challenging for models under these conditions, yet some models are still capable of generating acceptable examples of this language type.
## 1 Introduction
Phonetically constrained language generation is a primary subarea of computational creativity in natural language generation (NLG), primarily encompassing lyric and poetry generation (Tian and Peng, 2022; Wöckener et al., 2021; Xue et al., 2021; Zhang et al.,
2020a; Agarwal and Kann, 2020), as well as pun generation (Sun et al., 2022; He et al., 2019; Yu et al.,
2018), and continues to prove challenging for myriad reasons. Primarily, such works require the inclusion of phonetic factors such as metre and rhyme, which involves careful consideration of candidate vocabulary on the syllable level, leading to a reduced pool of allowable vocabulary once these constraints are in place.
Figure 1: Tongue Twister Generation aims to generate an utterance with high levels of phonetic overlap, requiring understanding of semantics, grammar, and phonetics.
In this paper, we present work on the generation of *tongue twisters*, a type of phonetically constrained language that is rarely explored in the NLG community. As a form of creative generation, tongue twisters can facilitate numerous useful applications, including: (1) being used as a pedagogical tool
(Sugiharto et al., 2022; Somoff, 2014; Wilshire, 1999); (2) as a source of humorous entertainment stemming from unintentional mispronunciations;
(3) as a stylistic device for engaging children in reading (e.g. Dr. Seuss stories (Geisel, 1965)); (4)
as a method of designing memorable slogans and tag lines (Guerini et al., 2015); and (5) as stimuli in neuroscience/physiology research (Wong et al., 2019; O'Halloran, 2020; Kember et al., 2017).
Tongue twister generation posits unique challenges compared to other generation tasks. One of the most pertinent features of tongue twisters is the presence of high levels of phonetic overlap across tokens (Wilshire, 1999). Consequently, whilst other types of creative generation may require only *some* output tokens to consider phonetics (such as rhyme or syllable counts), tongue twisters present an extreme version of this problem where the phonetics of almost all generated tokens must be considered. This leads to a very small vocabulary from which to choose semantically relevant words, and presents further challenges with maintaining grammatical validity.
The only work that we are aware of on tongue twister generation at the time of conducting this research is by Keh et al. (2022), who present models that train on graphemes and phonemes, and take either a starting prompt to be continued, or keywords around which to theme an output. They release TT-Corp, a dataset of 644 tongue twisters with parallel non-twister equivalents. We differentiate our work through the release of a dataset that is over 3x larger and which has undergone substantial human quality control. Furthermore, we assess the results of a wider range of popular pre-trained models on this task, including ChatGPT, without explicit injection of phonetic knowledge due to the difficulty in encoding phonetics and the expertise required to utilise phonetic characteristics appropriately. Our experimental results show that most popular pretrained language models
(PLMs) rely on pure word repetition to generate tongue twisters, whilst some (i.e. BART) are able to generate more sophisticated examples. Additionally, very large zero-shot models (i.e. ChatGPT) are able to generate convincing tongue twisters almost on-par with human equivalents.1 To summarise our contributions, we present:
- **TwistList**, a large annotated dataset of humanauthored tongue twisters, containing 2.1K+
examples with human evaluation of their quality.
- **TwisterMisters**, a series of baseline models for tongue twister generation using the most popular state-of-the-art PLMs.
- Extensive automatic and human evaluation to assess the ability of PLMs to implicitly model the complex phonetic phenomena in tongue twisters.
## 2 Related Works
Previous work in phonetically constrained generation has taken one of two approaches: 1) train a generation model on a collection of in-domain texts, or 2) train a generation model on prosaic out-of-domain text, with constraints imposed at decoding time. For example, Lau et al. (2018) collect 3,355 sonnets to produce novel poetry and train models to generate text in iambic pentameter, whilst Xue et al. (2021) train a rap generation model on 272,839 in-domain examples, infusing knowledge of rhythm afterwards. On the other hand, Van de Cruys (2020) train on a subset of CommonCrawl, imposing constraints on topic and 1Our code and resources can be accessed at https://github.com/tangg555/TwistList
| Dataset | Train | Val | Test | Total |
|------------------------------------|---------|-------|--------|---------|
| # Tongue Twisters | 1912 | 106 | 107 | 2128 |
| Vocabulary Size | 9556 | 946 | 880 | 10358 |
| # Total Phonemes | 55 | 43 | 46 | 56 |
| # RAKE Keywords | 3333 | 316 | 288 | 3567 |
| # BERTopic Keywords | 250 | 132 | 160 | 250 |
| Avg. # Input Keywords (RAKE) | 3.16 | 3.32 | 3.01 | 3.16 |
| Avg. # Input Phonemes | 5.57 | 5.83 | 5.16 | 5.56 |
| Avg. Tongue Twister Length (Words) | 15.01 | 16.59 | 13.54 | 15.01 |
| Avg. Tongue Twister Length (Phonemes) | 26.06 | 28.25 | 23.50 | 26.04 |

Table 1: Statistics of the TwistList dataset (train/validation/test splits).
rhyme as *a priori* distributions, whilst Tian and Peng
(2022) train a title-to-keyword module on narrative texts in addition to a sonnet generation model trained on news articles and short stories from Reddit. They imposed literary techniques (simile/metaphor) and metre/rhyme constraints at decoding time, owing to the lack of sufficient training data.2
## 3 Tongue Twister Generation

## 3.1 Task Definition
We formulate the task of tongue twister generation as follows: for a given set of keywords, we aim to generate a tongue twister T, whereby T comprises a sequence of words {w1,w2*,...w*n}. The generated output must satisfy the following constraints: (1) the output should be semantically related to the input keywords; (2) the output should show maximal levels of phonetic overlap across tokens; and (3) the output should be grammatically valid (Wilshire, 1999). Of these requirements, phonetic overlap is the most central to defining text as a "tongue twister".
## 3.2 TwistList Dataset
Dataset Construction. We present **TwistList**, an annotated dataset of 2.1K+ human-authored tongue twisters for use by the community. The examples contained therein come from a variety of sources available on the web.3 For each tongue twister, phonetic transcription is provided using the *g2p-en* package,4in addition to keywords extracted with RAKE and BERTopic to represent the topic of the tongue twister. Following experimentation with both RAKE and BERTopic, only RAKE keywords are used in training due to human preference and issues regarding the use of BERTopic on short texts (where frequently no keywords are extracted). The main statistics of the dataset are presented in Table 1.
| RAKE: | sells thick socks |
|------------|-------------------------------------------------------------------------------|
| BERTopic: | short shorts socks sock |
| Twister: | Seth at Sainsbury's sells thick socks. |
| Phonetics: | [S EH1 TH] [AE1 T] [S EY1 N S B ER0 IY0 Z] [S EH1 L Z] [TH IH1 K] [S AA1 K S] |
Table 2: Example from TwistList.

Quality Control. Quality control on our dataset was performed in multiple ways. Firstly, it was ensured that only sufficiently unique tongue twisters were kept in the dataset, as determined by removing examples with over 90% word overlap (rather than keeping variants of the same tongue twister, such as
"Peter Piper picked a pickled pepper" versus "Peter the Piper picked..."). Additionally, non-standard spellings were manually converted to standard US English5 to avoid G2P issues.6 Similarly, tongue-twisters containing obscure vocabulary (such as medicine and dinosaur names) were excluded to further minimise errors. An annotation platform was developed (see Appendix A.1), with which 3 human evaluators, who are native speakers of English, were provided with 100 sampled instances from the dataset to rate the quality of the resulting tongue twisters and the associated extracted keywords. The full dataset contains 2,500+ tongue twisters, of which 2,128 are kept for training/development/testing after filtering examples with insufficient extracted keywords and excessive similarity to existing entries.
To summarise, 3 annotators evaluated the quality of the dataset, where 88% of assessed tongue twisters were considered high quality, and 6% considered
"suitable" (Kappa = 0.321). An example from TwistList is provided in Table 2. As Table 4 shows, the final dataset can be considered high quality, owing to fair/moderate levels of approval and agreement across evaluators. Demographic information of the evaluators can be found in Appendix A.2.
## 3.3 Baseline Models
We present the following baseline models (dubbed TwisterMisters) for the task of tongue twister generation on our TwistList dataset:
Finetuned Baselines. For the finetuned baselines, we chose popular models for language generation, including **GPT-2** (Radford et al., 2019), **DialoGPT**
(Zhang et al., 2020c), T5 (Raffel et al., 2020), and BART (Lewis et al., 2020). These were finetuned with RAKE keywords extracted from human-authored tongue twisters as the input and the tongue twister text from **TwistList** as the target. This was in order to represent our baselines training on in-domain data.
At inference time, the prompt "Generate tongue twisters about the keyword(s): X" is used, where X refers to the input consisting of one or more RAKE keywords extracted from tongue twisters. The full training details are given in Appendix A.3. We also conducted experiments on all aforementioned baselines without finetuning (i.e., a zero-shot setting),
and the results were very poor. Therefore, we did not include these results in the paper.
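The construction of one fine-tuning example can be sketched as follows; the BART checkpoint, maximum lengths, and keyword formatting shown here are illustrative rather than the exact training configuration (see Appendix A.3).

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

tok = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def encode_example(keywords, twister, max_len=64):
    """One (prompt, target) pair for seq2seq fine-tuning on TwistList."""
    prompt = f"Generate tongue twisters about the keyword(s): {' '.join(keywords)}"
    enc = tok(prompt, truncation=True, max_length=max_len)
    enc["labels"] = tok(text_target=twister, truncation=True, max_length=max_len)["input_ids"]
    return enc

example = encode_example(["sells", "thick", "socks"],
                         "Seth at Sainsbury's sells thick socks.")
```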
Training-Free Baseline We additionally provide a TwisterMister baseline that does not require any training. We utilise OpenAI's **ChatGPT**7 with the same prompt as a zero-shot setting for generation.8 Each request to ChatGPT was submitted as part of a separate session, to avoid the effects of extended dialogue influencing outputs. ChatGPT has been utilised in order to set a practical upper-bound of what may be expected from models without explicit phonetic knowledge, owing to its wealth of training data and 175B parameter architecture.9It is assumed that ChatGPT's training data contains tongue twisters, and therefore it is able to abstract away the general patterns of such language in order to provide novel examples (though most likely based on graphemes rather than phonemes).
## 4 Experiments
Automatic Evaluation. We present the results of automatic evaluation on generated outputs and golden examples in Table 3 for the following metrics:
Perplexity (PPL), BLEU (**B-1/B-2**) (Papineni et al.,
2002), ROUGE (**R-1/R-2/R-L**) (Lin, 2004), and BERTScore Precision, Recall, and F-Measure (Zhang
7 https://chat.openai.com/chat
8 No direct comparison is made to PANCETTA (Keh et al.,
2022) as no code has been publicly released at the time of writing, and essential implementation details are absent from the paper.
9ChatGPT based on GPT-3.5, rather than GPT-4.
| Model | PPL↓ | B-1↑ | B-2↑ | R-1↑ | R-2↑ | R-L↑ | PO↓ | Init-PO↓ | BS-P↑ | BS-R↑ | BS-F↑ |
|----------|--------|--------|--------|--------|--------|--------|-------|------------|---------|---------|---------|
| GPT-2 | 8.40 | 0.007 | 0.003 | 1.301 | 0.123 | 1.315 | 0.022 | 0.020 | 0.690 | 0.810 | 0.744 |
| DialoGPT | 3.83 | 0.038 | 0.025 | 7.724 | 3.610 | 7.640 | 0.069 | 0.089 | 0.754 | 0.831 | 0.790 |
| T5 | 10.16 | 0.057 | 0.038 | 9.701 | 4.573 | 9.574 | 0.689 | 0.727 | 0.795 | 0.818 | 0.806 |
| BART | 1.65 | 0.073 | 0.051 | 11.883 | 6.109 | 10.353 | 0.075 | 0.120 | 0.795 | 0.845 | 0.819 |
| ChatGPT | N/A | 0.200 | 0.137 | 36.765 | 20.659 | 33.437 | 0.093 | 0.157 | 0.888 | 0.894 | 0.883 |
| Choices (%) | Sample Quality | | | |
|-------------------|------------------|------|-------|-------|
| High. | Suitable. | Bad. | Kappa | |
| RAKE keywords | 82.0 | 18.0 | 0.0 | 0.321 |
| BERTopic keywords | 15.0 | 85.0 | 0.0 | 0.445 |
| Tongue Twisters | 88.0 | 6.0 | 4.0 | 0.321 |
et al., 2020b) (**BS-P/BS-R/BS-F**). PPL, BLEU and ROUGE are standard metrics in language generation to assess quality, whilst BERTScore assesses semantic similarity to a gold reference. Additionally, we propose two new metrics, Phonetic Overlap (PO) and Initial Phonetic Overlap (**Init-PO**). PO refers to the average overlap of all phonemes across tokens (\# unique phonemes / \# total phonemes), whereas **Init-PO** is the ratio of unique word-initial phonemes to the number of words (\# unique word-initial phonemes / \# words).
These phonetic metrics reward longer outputs. We argue that, all things equal, a longer tongue twister is better than a shorter one as it provides more entertainment and more opportunities for mispronunciation.
Perfect scores on PO and Init-PO can be achieved by repetition of a single word. Whilst this does not lead to high quality outputs, these metrics are intended exclusively to be indicators of the phonetics, rather than an overall guide to quality. In both cases, higher levels of overlap results in lower ("better") scores, and the highest ("worst") achievable score is 1.
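A sketch of how PO and Init-PO can be computed with g2p-en is given below; whether stress markers are stripped before counting unique phonemes is not specified above, so this sketch keeps them.

```python
from g2p_en import G2p

g2p = G2p()

def phonetic_overlap_metrics(text):
    """PO = # unique phonemes / # total phonemes; Init-PO = # unique word-initial
    phonemes / # words. Lower is better; the worst possible score is 1."""
    words = text.split()
    per_word = [[p for p in g2p(w) if p.strip()] for w in words]   # ARPAbet phonemes per word
    phonemes = [p for ph in per_word for p in ph]
    initials = [ph[0] for ph in per_word if ph]
    po = len(set(phonemes)) / max(len(phonemes), 1)
    init_po = len(set(initials)) / max(len(words), 1)
    return po, init_po

print(phonetic_overlap_metrics("she sells sea shells by the sea shore"))
```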
The results in Table 3 show rather clear scaling, with the performance ranking on most metrics (except Perplexity and phoneme overlap) being identical. On the models explicitly finetuned for this task, GPT-2 is shown to be the worst, whilst BART performs the best. We hypothesise that GPT-2's poor performance is in part due to its simple causal language modelling objective alongside its decoder-only architecture
(which is also in DialoGPT). Furthermore, whilst T5 performed well on the automatic metrics, manual inspection revealed that T5 often misinterpreted the task from the prompt, choosing to select its own keywords from the entire prompt, rather than using only the provided keyword list. On the other hand, the training-free zero-shot model, ChatGPT, was shown to perform best on all metrics. This is to be expected as ChatGPT has over 50x more parameters than any other tested PLM, with various pre-training objectives and reinforcement learning, leading to performant zero-shot capabilities. This further demonstrates that PLMs struggle to learn phonetic patterns implicitly from text, especially in English, which has high levels of irregular orthography. Furthermore, with limited data, PLMs struggle to learn the unusual probability distributions underlying tongue twisters, where word choices are intentionally "twisted", obscure, and anti-euphonious. Additionally, due to the wealth of training data seen by ChatGPT, it is likely that many examples have been seen during training.
Human Evaluation. Due to tongue twisters being a creative domain where articulation abilities are tested, we also perform human evaluation. Three evaluators were asked to rate 100 outputs from the best-performing standard baseline (BART), in addition to ChatGPT outputs and gold examples from **TwistList**, on the following criteria: **Relevance** (how relevant the tongue twister is given the keyword inputs), **Fluency** (how grammatically valid the output is), **Difficulty of Articulation** (how difficult a tongue twister is to say), **Cohesion** (how much sense the output makes), and **Entertainment Value** (how entertaining the output is, considering sounds and semantics). All ratings were on a 5-point Likert scale. Evaluator demographics and training materials are in Appendix A.2.
The mean scores of human evaluation (Table 5)
fall in line with expectations, with *golden* examples performing best on all metrics, and ChatGPT placing second on all but Difficulty of Articulation.10 BART
is able to produce outputs that are deemed to be the
| Score (1 to 5) | BART | ChatGPT | Golden |
|----------------------------|---------|---------|---------|
| Relevance | 4.667∗ | 4.971† | N/A |
| Difficulty of Articulation | 4.143∗ | 4.102∗ | 4.291∗ |
| Fluency | 3.028∗∗ | 4.915∗∗ | 4.938∗∗ |
| Coherence | 3.217∗ | 4.798∗ | 4.909∗ |
| Entertainment Value | 3.269∗ | 4.070∗ | 4.254∗ |
second most difficult to articulate, which we infer may be the result of slight morphological variants of input keywords being used repeatedly, making distinguishing between them during articulation quite challenging (whilst not being able to exploit deeper phonetic relations). The moderate score on Fluency
(3.028) suggests instances of poor grammar may also hinder articulation abilities when expected grammatical structures are not found, leading to an interaction between grammatical validity and articulatory difficulty. Additionally, ChatGPT scoring the lowest for articulatory difficulty may be due to occasionally misunderstanding the requirements of a tongue twister, sometimes producing rhymes or standard prose (see Appendix A.4). However, ChatGPT scores well for Relevance and Fluency, highlighting its capability in producing high-quality coherent language. Perhaps most interestingly, none of the BART score averages on any human evaluation criteria fall below 3 ("neither agree nor disagree"). This performance is therefore quite good for a model finetuned on only 2128 examples, with no additional phonetic knowledge.
| Input | assistant assist |
|----------|-------------------------------------------------------------------------------------|
| GPT-2 | assistant assist assistant assist assistant |
| DialoGPT | assistant assistant assistant assistant assistant assistant assistant assistant |
| T5 | assistant assist assistant |
| BART | A assistant assist is an assistant assist, assistants assist to assist assistants. |
| ChatGPT | Assistant ants assist ants in carrying leaves to the ant hill. |
| Golden | If I assist a sister-assistant, will the sister's sister-assistant assist me? |
## 5 Case Study
Within the example in Table 6, GPT-2 resorts to simply repeating the input, successfully achieving phonetic overlap, but failing to be grammatically valid or particularly sophisticated. This pattern is also demonstrated by DialoGPT and T5. Conversely, BART is able to introduce tokens unseen in the input to create an almost grammatically valid output (the primary mistake being indefinite article agreement, where in the first instance "an" would have been correct, rather than "a"). BART's output is also semantically and logically coherent, with
"A assistant assist is an assistant assist" being valid
(yet redundant), and "assistants assist to assist assistants" also being comprehensible. This example demonstrates why evaluators with high English proficiency and language/linguistics education were selected, as the same word may have different parts of speech, creating outputs that seem grammatically invalid, but do actually follow the rules of English.11 Further investigation is needed to ascertain whether the models are intentionally exploiting this lexical ambiguity, or if human evaluators are demonstrating apophenia, where patterns are found in what is effectively noise (Brugger, 2001). Finally, ChatGPT
utilises morphology to exploit the similarity of the plural noun "assistants" and the phrase "assist ants", and provides a continuation that is in line with the expected behaviour of ants. In comparison to the golden example, ChatGPT's output may be considered more interesting topic-wise, at the expense of not being as phonetically complex ("carrying leaves to the ant hill" contributes heavily to semantics, whilst not being recognisable as part of a tongue twister).
For further analysis, please see Appendix A.4.
## 6 Conclusion
We present work on the topic of tongue twister generation, a form of phonetically-constrained language generation that aims to maximise phonetic overlap, whilst conveying meaningful semantics. We motivate the potential application domains for such generated language, and provide a large annotated dataset of tongue twisters, **TwistList**, to encourage further work. Finally, we present a series of benchmark models alongside automatic/human evaluation to assess generation quality.
## Limitations
Whilst the system presented within this paper is capable of allowing human-in-the-loop contributions (via selecting the input keywords on which to condition the output), it is not able to produce tongue-twisters that take advantage of particular features of speech sounds such as place and manner of articulation, in order to create more advanced outputs that exploit phonetic relatedness (rather than exact matches). The same can be said of our proposed metrics, PO and Init-PO, which do not account for phonetic similarity across sounds that share manner/place of articulation
(e.g. "she sells sea shells"). Additionally, whilst commonly known tongue twisters may follow a particular format (e.g. rhyme schemes), such schemes and templates have not been enforced here. We also do not demonstrate the capabilities of these systems if they were trained on phonetic transcriptions explicitly, as we only aim to assess their performance when training on graphemes in standard orthography.
## Ethics Statement
All use of human participants in this study has been approved by the Ethics Board of the primary author's institution, including the disclosure of demographic information. Regarding the generation of tongue twisters, language generation is a necessarily creative domain that has the ability to reproduce content that some individuals may find offensive. Care was taken to check outputs in the human evaluation set for any such materials, and if they had been produced, they would have been removed from the evaluation set. Additionally, no egregiously offensive material has been provided in the TwistList dataset. However, the distinction between offensive and humorous content is a highly complex topic, and therefore some examples within the dataset may not be suitable for all individuals (e.g. suggestive content and swearing, such as "I'm not the pheasant plucker, I'm the pheasant plucker's son", and the clear relation to common expletives).
## Acknowledgements
Tyler Loakman is supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT)
and their Applications funded by UK Research and Innovation [grant number EP/S023062/1]. Chen Tang is supported by the China Scholarship Council (CSC) for his doctoral study (File No.202006120039).
## References
Rajat Agarwal and Katharina Kann. 2020. Acrostic poem generation. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 1230–1240, Online. Association for Computational Linguistics.
Peter Brugger. 2001. From haunted brain to haunted science: A cognitive neuroscience view of paranormal and pseudoscientific thought. In James Houran and Rense Lange, editors, *Hauntings and Poltergeists: Multidisciplinary Perspectives*, pages 195–213. McFarland.
Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378.
Theodore Seuss Geisel. 1965. *Fox in socks: Dr. Seuss's* book of tongue tanglers. Random House.
Marco Guerini, Gözde Özbal, and Carlo Strapparava.
2015. Echoes of persuasion: The effect of euphony in persuasive communication. In *Proceedings of the* 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1483–1493, Denver, Colorado. Association for Computational Linguistics.
He He, Nanyun Peng, and Percy Liang. 2019. Pun generation with surprise. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1734–1744, Minneapolis, Minnesota.
Association for Computational Linguistics.
Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin, and Chenghua Lin. 2022. Improving Chinese story generation via awareness of syntactic dependencies and semantics. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers).
Sedrick Scott Keh, Steven Y. Feng, Varun Gangal, Malihe Alikhani, and Eduard Hovy. 2022. Pancetta: Phoneme aware neural completion to elicit tongue twisters automatically.
Heather Kember, Kathryn Connaghan, and Rupal Patel.
2017. Inducing speech errors in dysarthria using tongue twisters. *International journal of language &*
communication disorders, 52(4):469–478.
Jey Han Lau, Trevor Cohn, Timothy Baldwin, Julian Brooke, and Adam Hammond. 2018. Deep-speare: A
joint neural model of poetic language, meter and rhyme.
In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1:
Long Papers), pages 1948–1958, Melbourne, Australia.
Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Ken D. O'Halloran. 2020. A tongue-twister to translation?
increased complexity of genioglossus movement during wakefulness in persons with obstructive sleep apnoea.
The Journal of Physiology, 598(3):435–436.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th* Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.
Journal of Machine Learning Research, 21(140):1–67.
Victoria Somoff. 2014. Four is not fourteen: Tongue twister patterns and the unmastery of language. Western Folklore, 73(2/3):195–215.
Prasetyawan Sugiharto, Yan Santoso, and Maila Shofyana.
2022. Teaching English pronunciation using tongue twister. *Acitya: Journal of Teaching and Education*,
4(1):189–197.
Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng. 2022. Context-situated pun generation.
In *EMNLP 2022*.
Chen Tang, Chenghua Lin, Henglin Huang, Frank Guerin, and Zhihao Zhang. 2022a. EtriCA: Eventtriggered context-aware story generation augmented by cross attention. In *Findings of the Association for* Computational Linguistics: EMNLP 2022.
Chen Tang, Hongbo Zhang, Tyler Loakman, Chenghua Lin, and Frank Guerin. 2022b. Terminology-aware medical dialogue generation. *arXiv preprint arXiv:2210.15551*.
Chen Tang, Zhihao Zhang, Tyler Loakman, Chenghua Lin, and Frank Guerin. 2022c. NGEP: A graph-based event planning framework for story generation. In Proceedings of AACL-IJCNLP, Online.
Yufei Tian and Nanyun Peng. 2022. Zero-shot sonnet generation with discourse-level planning and aesthetics features. In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3587–3597, Seattle, United States.
Association for Computational Linguistics.
Tim Van de Cruys. 2020. Automatic poetry generation from prosaic text. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 2471–2480, Online. Association for Computational Linguistics.
Carolyn E. Wilshire. 1999. The "tongue twister" paradigm as a technique for studying phonological encoding.
Language and Speech, 42(1):57–82.
Jörg Wöckener, Thomas Haider, Tristan Miller, The-Khang Nguyen, Thanh Tung Linh Nguyen, Minh Vu Pham, Jonas Belouadi, and Steffen Eger. 2021. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? In *Proceedings of the 5th* Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 57–66, Punta Cana, Dominican Republic (online). Association for Computational Linguistics.
Min Ney Wong, Yanky Chan, Manwa L. Ng, and Frank F.
Zhu. 2019. Effects of transcranial direct current stimulation over the broca's area on tongue twister production. International Journal of Speech-Language Pathology, 21(2):182–188. PMID: 29642741.
Lanqing Xue, Kaitao Song, Duocai Wu, Xu Tan, Nevin L.
Zhang, Tao Qin, Wei-Qiang Zhang, and Tie-Yan Liu.
2021. DeepRapper: Neural rap generation with rhyme and rhythm modeling. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 69–81, Online. Association for Computational Linguistics.
Zhiwei Yu, Jiwei Tan, and Xiaojun Wan. 2018. A neural approach to pun generation. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650–1660, Melbourne, Australia. Association for Computational Linguistics.
Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, and Minlie Huang. 2020a. Youling: an AI-assisted lyrics creation system. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 85–91, Online. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020b. Bertscore:
Evaluating text generation with bert. In *International* Conference on Learning Representations.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020c. DIALOGPT : Large-scale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting* of the Association for Computational Linguistics:
System Demonstrations, pages 270–278, Online.
Association for Computational Linguistics.
## A Appendices

## A.1 Dataset Quality Control
An annotation platform was developed for this process, as shown in Figure 2.
## A.2 Human Participants
Due to tongue twisters being highly reliant on articulation abilities, the demographics of the human participants used within this work are highly important. Additionally, tongue twisters are also a form of humour and entertainment, where individual perceptions of what may or may not be considered humorous or entertaining differ according to numerous factors. In an effort to remain as transparent as possible, and to follow best practices for human evaluation, relevant demographic information about participants is outlined below (with the requisite permission and ethical approval).
Dataset Evaluation All evaluators involved in the quality control process of the **TwistList** dataset are native speakers of English, and either have or are working towards University level qualifications.
Additionally, 2 of the 3 evaluators have extensive education in linguistics or modern languages. No monetary incentive was provided.
Generation Evaluation All evaluators involved in the evaluation of the quality of generated tongue twisters are native speakers of English, and either hold or are working towards University level qualifications in Linguistics, Modern Languages or NLP.
Additionally, all evaluators cited the United Kingdom as their country of socialisation, and no participants reported language processing difficulties that could affect results. No monetary incentive was provided.
Materials Provided to Human Participants Additionally, all evaluators for both the dataset and generation outputs were presented with calibration examples to demonstrate the sort of outputs that would be presented, and the logic behind particular scores, in order to minimise individual interpretations of the scoring criteria. All evaluation was performed on a custom made online annotation platform (Figure 3).
## A.3 Training Details
All pre-trained models used (naturally excluding ChatGPT) are based on publicly available checkpoints from Hugging Face.12 Models are trained for up to 5 epochs on a Tesla A5000 machine, with the best checkpoints selected based on validation loss. The batch size is set to 32 and the learning rate to 8e−5, with the Adam optimiser used for training. To help the loss converge on our small few-shot dataset, we limit the generation length to 100 (covering all test tongue twisters), and the source length to 150. Training and testing are implemented with the PyTorch Lightning13 framework to ensure experimental reliability, and all language models are trained and tested with the same procedure.
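As a point of reference, the sketch below reproduces this configuration with the HuggingFace Seq2SeqTrainer rather than our PyTorch Lightning setup; the checkpoint name, dataset field names, and preprocessing function are illustrative assumptions, not the exact training code.

```python
# Sketch of the fine-tuning setup described above (5 epochs, batch size 32,
# lr 8e-5, source length 150, target length 100), shown with the HuggingFace
# Trainer API; field names such as "keywords"/"twister" are assumptions.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL_NAME = "facebook/bart-base"          # any of the tested checkpoints
MAX_SOURCE_LEN, MAX_TARGET_LEN = 150, 100  # limits stated in this appendix

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


def preprocess(example):
    # map a keyword string to tokenized inputs with the tongue twister as labels
    inputs = tokenizer(example["keywords"], truncation=True, max_length=MAX_SOURCE_LEN)
    labels = tokenizer(example["twister"], truncation=True, max_length=MAX_TARGET_LEN)
    inputs["labels"] = labels["input_ids"]
    return inputs


collator = DataCollatorForSeq2Seq(tokenizer, model=model)
args = Seq2SeqTrainingArguments(
    output_dir="twistlist-bart",
    num_train_epochs=5,
    per_device_train_batch_size=32,
    learning_rate=8e-5,
    save_strategy="epoch",  # checkpoints per epoch; best chosen by validation loss
)

# train/validation datasets would be the TwistList splits mapped through
# `preprocess`, then passed to:
# Seq2SeqTrainer(model=model, args=args, data_collator=collator,
#                train_dataset=train_ds, eval_dataset=val_ds).train()
```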
## A.4 Further Qualitative Comments
Whilst the pattern of extreme word repetition is seen in many of the finetuned models (often with the exception of BART, which is demonstrated to be capable of producing slightly more sophisticated outputs), overall assessment of the tongue twisters produced at inference time reveals interesting patterns, particularly in regard to ChatGPT outputs. Firstly, the limits of ChatGPT are made apparent in a few examples, such as the input "silver shiny ship sank" generating "How much wood would a woodchuck chuck if a woodchuck could chuck silver shiny ships?", a clear derivation of a famous woodchuck-related tongue twister that, it is rather safe to assume, appears multiple times in ChatGPT's training material. Additionally, comments from evaluators also reveal that ChatGPT's output is often considered more of a rhyme or general literary text, rather than specifically a tongue twister. However, examples such as these are also found in the human-authored golden examples, demonstrating that there is no steadfast consistent opinion as to what constitutes a
(good) tongue twister. Likewise, some examples may contain large amounts of sound repetition, but not in a way that necessarily presents articulatory difficulty.
## A.5 Future Works
In this paper, we mainly analyse the performance of large-scale pretrained language models (PLMs)
on Tongue Twister Generation, and propose a corresponding dataset for further investigation. In future work, we aim to propose novel models that can better leverage phonetic symbols. There are numerous existing works (Huang et al., 2022; Tang et al., 2022a,b) that provide approaches for injecting such knowledge into PLMs. However, phonetic features differ from these text-format knowledge items, as phonemes are hard to encode alongside input text tokens when feeding into PLM encoders. Another promising approach is to explicitly model phonetic features as part of the text sequence (Tang et al., 2022c), though there is no established method for transforming phonetic notation in this way. We intend to perform further research based on these existing approaches.

![8_image_0.png](8_image_0.png)

12https://huggingface.co/models

13https://www.pytorchlightning.ai/
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, in the required Limitations section as well as Section 4 (concerning our proposed metrics)
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract (all) and contribution summary at the end of the introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did you use or create scientific artifacts?**
TwistList Dataset (Section 3.2)
✓ B1. Did you cite the creators of artifacts you used?
Sources of all entries in the dataset are credited in the .json file for each entry.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We did not discuss the licensing around our dataset. The dataset uses works that are freely available on the web and come from various sources such as websites, blogs, and ebooks. Many of these cases are Public Domain, and for those that are not, we believe we are in accordance with Fair Use, as the dataset does not encroach on the use case of the original works (no graphic design/other elements are maintained) and the dataset is for use as a research tool only. We will also reply promptly to any cases of copyright infringement that relevant copyright holders make us aware of.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
See answer to B2.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
See the Ethics Statement regarding the potential for tongue twisters to be offensive. Additionally, all tongue twisters are believed to be about fictional characters, rather than individuals.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Such details are not explicitly stated. However, it can be easily ascertained from the paper that the tongue twisters we focus on are entirely in English (and the range of domains the tongue twisters were taken from can be seen in the "source" entry for each example).
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
See Table 1 for dataset statistics.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did you run computational experiments?**
Section 4 (Page 3)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Tables 3/5. Scores are the mean, as is standard.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Exact details of evaluation implementations (except Phonetic Overlap) were not detailed. This is in part due to these metrics (BLEU/ROUGE/BERTScore) not being very reliable for creative language generation, and therefore the exact values from different implementations are not likely to be of use.
## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3.2 and Section 4, in addition to Appendix A.2
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Screenshots of the annotation platforms can be found in Figures 2 and 3 in the Appendix
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We declared that no monetary incentive was given to participants. We did not specify the recruitment process, but due to participants all holding or working towards university level qualifications, it can be inferred that they are colleagues.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
This information was not deemed necessary in the submitted paper (due to the limited risk of the data we were working with). However, it is stated in the Ethical Statement and Appendix A.2 that all shared information about human demographics was collected with the necessary permissions and approval.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Ethical approval was gained for human evaluation of the dataset and generated outputs from the relevant institution
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We provide demographic information for human participants in Appendix A.2 |
card-2023-substitution | Substitution-based Semantic Change Detection using Contextual Embeddings | https://aclanthology.org/2023.acl-short.52 | Measuring semantic change has thus far remained a task where methods using contextual embeddings have struggled to improve upon simpler techniques relying only on static word vectors. Moreover, many of the previously proposed approaches suffer from downsides related to scalability and ease of interpretation. We present a simplified approach to measuring semantic change using contextual embeddings, relying only on the most probable substitutes for masked terms. Not only is this approach directly interpretable, it is also far more efficient in terms of storage, achieves superior average performance across the most frequently cited datasets for this task, and allows for more nuanced investigation of change than is possible with static word vectors. | # Substitution-Based Semantic Change Detection Using Contextual Embeddings
Dallas Card University of Michigan School of Information, Ann Arbor, MI
[email protected]
## Abstract
Measuring semantic change has thus far remained a task where methods using contextual embeddings have struggled to improve upon simpler techniques relying only on static word vectors. Moreover, many of the previously proposed approaches suffer from downsides related to scalability and ease of interpretation.
We present a simplified approach to measuring semantic change using contextual embeddings, relying only on the most probable substitutes for masked terms. Not only is this approach directly interpretable, it is also far more efficient in terms of storage, achieves superior average performance across the most frequently cited datasets for this task, and allows for more nuanced investigation of change than is possible with static word vectors.
## 1 Introduction
Measuring semantic change is one of the few areas of NLP where contextual embeddings have not yet led to a definitive improvement over previous methods. In particular, the commonly used approach of aligning static embeddings trained on different time periods (Hamilton et al., 2016b) continues to be a surprisingly hard to beat baseline.
Given that contextual embeddings provide a representation for each occurrence of a word in context, they would seem to be ideally suited to a more nuanced investigation of semantic change. Most attempts to leverage them for this purpose, however, produce quantitatively worse results, while being less interpretable and requiring more resources.
Here, we present a simplified and improved approach to scalable, interpretable semantic change detection using contextual embeddings. Inspired by Eyal et al. (2022), we work only with the most probable replacements for masked words, and measure semantic change in terms of the distributions of replacements in each time period. Not only does this better match human judgements, it is highly space efficient, works seamlessly for out-of-vocabulary words, and helps intuitively characterize meaning change and variation.
## 2 Background
Measuring semantic change involves a set of tasks related to determining if and how a term's meaning has changed over time. Here, we focus on the task of measuring the amount of change that has occurred from one time period to another (Gulordava and Baroni, 2011; Schlechtweg et al., 2020).1 Existing approaches to this task are mostly of two types. The first is associating each term with a single vector per time period and measuring the distance between vectors, of which we take Hamilton et al. (2016b) to be representative. As a variation on this, several authors have proposed averaging the output of contextual embedding models to get a single vector per term in each time period, but this has generally not led to an improvement over using static vectors (Martinc et al., 2020a; Kurtyigit et al.,
2021; Liu et al., 2021). A related approach is to represent words in terms of their nearest neighbors using static word vectors (Hamilton et al., 2016a; Gonen et al., 2020), but this does not show a clear improvement over other static embedding methods
(Montariol et al., 2021).
A second type of approach begins with various methods for word sense induction, then measures change in terms of the relative prevalence of a term's different senses (Frermann and Lapata, 2016; Hu et al., 2019; Arefyev and Zhikov, 2020; Arefyev and Bykov, 2021). In some cases, authors simply cluster contextual representations for each term, and measure differences in the distributions of clusters between two time periods, rather than dealing with explicit word senses (Giulianelli et al., 2020; Martinc et al., 2020b; Montariol et al., 2021).
1For surveys of computational approaches to lexical semantic change detection, see Kutuzov et al. (2018), Tang (2018),
and Tahmasebi et al. (2021).
Despite the additional information provided by contextual embedding models, methods using type embeddings (as opposed to token), continue to be competitive. For example, on the recent SemEval multilingual semantic change detection task, none of the top four systems used token embeddings
(Schlechtweg et al., 2020). Methods using contextual embeddings have done better on some more recent mono-lingual shared tasks (Kutuzov and Pivovarova, 2021; Zamora-Reina et al., 2022), but have not yet been evaluated with a consistent setup across multiple languages.
## 3 Methods
Building on Eyal et al. (2022), we represent each token in the corpus (or a sufficiently large sample of them) by a small set of probable replacement terms from a contextual embedding model. However, whereas Eyal et al. (2022) did this for the purpose of word sense disambiguation, we do so for the purpose of measuring semantic change.
For each sampled occurrence of each term, we mask the term of interest, feed the masked context through a model, and obtain the predicted token probabilities corresponding to the mask token.2 From these, we save only the top-k most probable words (excluding stopwords and partial word pieces), and discard the rest.
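As a rough illustration of this step (a sketch, not the exact implementation), the following shows how the top-k substitutes for a single occurrence might be obtained with a masked language model; the stopword list and example sentence are placeholders.

```python
# Sketch of extracting top-k substitutes for one occurrence of a target term.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "it", "is", "was"}  # placeholder


def top_k_substitutes(context, target, k=5, n_candidates=50):
    """Mask `target` in `context` and return the k most probable whole-word,
    non-stopword substitutes from the model vocabulary."""
    masked = context.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    substitutes = []
    for idx in torch.topk(logits, n_candidates).indices.tolist():
        token = tokenizer.convert_ids_to_tokens(idx)
        if token.startswith("##") or not token.isalpha() or token in STOPWORDS:
            continue  # skip partial word pieces, punctuation, and stopwords
        substitutes.append(token)
        if len(substitutes) == k:
            break
    return substitutes


print(top_k_substitutes("The plane landed safely at the airport.", "plane"))
```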
For a given term in a particular time period, we then count how many times each word in the model vocabulary has appeared as a top-k replacement for that term, and normalize this by its sum, giving us a distribution over replacements. To obtain a raw score of semantic change between two time periods, we compute the Jensen-Shannon Divergence
(JSD) between the two distributions representing the same term in different time periods. However, as we show below, the raw JSD scores are strongly correlated with term frequency. Thus, to obtain a scaled metric, we convert the raw JSD scores into a quantile, comparing the raw score for a term of interest to other terms with similar frequency.
Compared to saving the full output vector per token, this approach requires only a minuscule amount of storage per token, and thus does not require the kind of heuristic dropping of tokens employed by Montariol et al. (2021). In addition, the dominant meanings of a word in each context can be summarized by the terms which occur most frequently among the top-k replacements. Although such replacements are limited to the terms which exist in the model vocabulary, in practice this is sufficient to represent a nuanced set of meanings, and works even for words which get tokenized into multiple word pieces, as we show below.

2Words that get tokenized into multiple word pieces are replaced by a single mask token.
More formally, given two corpora C1 and C2, let the count of token v as a top-k replacement for term t in corpus c be:
$$\operatorname{count}(v,t,c)=\sum_{i=1}^{N_{c}(t)}\mathbb{I}[v\in R(t,i,k)],\tag{1}$$
where R(t, i, k) is the set of top-k most probable replacements for occurrence i of term t (excluding stopwords and partial word pieces in the model vocabulary), and Nc(t) is the number of sampled occurrences of term t in corpus c.3
Let $\Delta_t^c$ be the distribution of top-k replacement counts for term t in corpus c, obtained by dividing the corresponding vector of counts (i.e., [count(·, t, c)]) by its sum over the model vocabulary. The raw change score for term t is given by the JSD between the two distributions:
$$\mathrm{raw}(t)=\mathrm{JSD}\left(\Delta_{t}^{C1},\Delta_{t}^{C2}\right).\tag{2}$$
Finally, we correct for frequency effects by rescaling the raw JSD scores against the scores for terms with similar frequency as the target term, giving us a quantile scaled in [0, 1]:
$$\operatorname{scaled}(t)=\sum_{s\in T(t)}\mathbb{I}[\operatorname{raw}(t)\geq\operatorname{raw}(s)]\,/\,|T(t)|,\tag{3}$$
where T(t) is the set of terms with similar frequency to term t (excluding term t itself). More specifically, we compare against all terms within a fixed factor of the target frequency:
$$T(t)=\{s:\mbox{fr}(t)/F\leq\mbox{fr}(s)\leq\mbox{fr}(t)\times F,s\neq t\},\tag{4}$$
where fr(t) is the frequency of term t in the corpus,
with window factor F.
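A minimal sketch of these scoring steps is shown below, assuming the top-k substitute counts have already been collected into per-corpus dictionaries; variable names are illustrative.

```python
# Sketch of the raw (Eq. 2) and scaled (Eqs. 3-4) change scores, given
# substitute counts per term and corpus, e.g. counts1["plane"]["aircraft"] = 17.
import numpy as np


def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two count vectors."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log2(a[nz] / b[nz])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def raw_score(counts1, counts2):
    """JSD between a term's substitute distributions in the two corpora."""
    vocab = sorted(set(counts1) | set(counts2))
    p = np.array([counts1.get(v, 0) for v in vocab], dtype=float)
    q = np.array([counts2.get(v, 0) for v in vocab], dtype=float)
    return jsd(p, q)


def scaled_score(target, raw, freq, F=2.0):
    """Quantile of raw(target) among background terms of similar frequency."""
    lo, hi = freq[target] / F, freq[target] * F
    background = [s for s in raw if s != target and lo <= freq[s] <= hi]
    return sum(raw[target] >= raw[s] for s in background) / len(background)
```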
## 4 Experiments
To evaluate our method we make use of datasets for which there have been prior evaluations of methods across multiple languages, following standards established by past work for the sake of a head-to-head comparison.4

3Unlike Eyal et al. (2022), we do not combine probabilities for different forms of the same lemmas in the model vocabulary. In addition, we do not exclude the target term from the top-k replacements, except implicitly for terms which get split into multiple word pieces.

4Code to replicate these experiments is available at https://github.com/dallascard/SBSCD
## 4.1 Data
We use five datasets with words labeled in terms of semantic change between two time periods. Four of these are from SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection (SE;
Schlechtweg et al., 2020). These datasets contain 31 to 48 terms from four languages, graded in terms of change by human raters, along with accompanying corpora to be used in estimating the amount of change. The fifth dataset (GEMS) comes from Gulordava and Baroni (2011), and contains 100 words labeled in terms of semantic change from the 1960s to 1990s. As with most recent papers which use this dataset, we use the Corpus of Historical American English (COHA; Davies, 2010)
for measuring change in the GEMS words.
## 4.2 Experimental Details
For each dataset, we fine-tune an appropriate BERT
model to the union of the two associated unlabeled corpora using continued masked language model training with the HuggingFace transformers package. We then index the corpora to find all occurrences of each word. For all target words, along with a random set of 10,000 background terms, we randomly sample up to 4,000 occurrences of each from the associated corpora. We process all sampled tokens as described above to obtain and store the top-k replacements for each, with k = 5. Using the replacements obtained from the model, we compute raw JSD scores for each term. Finally, we convert these to scaled scores by comparing to the background terms that have frequency within a factor of two of the target term (i.e., F = 2).
Following past work, we evaluate using Spearman correlation with human ratings, comparing against the best results from recent papers. In particular, we include two results based on slight variations on Hamilton et al. (2016b), one of which was the best performing method in the SemEval competition (Pömsl and Lyapin, 2020), as well as methods using contextual embeddings (Martinc et al.,
2020b; Montariol et al., 2021). For full experimental details, please refer to Appendix A.
## 4.3 Results
Full results are given in Table 1. Although our method is not uniformly better than all previous methods on all datasets, it does produce the best result on average, as well as improvements on GEMS,
SE English and SE Latin.
![2_image_0.png](2_image_0.png)
As an example to better understand these results, the raw JSD scores from our method are shown in Figure 1 (top) for the SE English data, with select terms labeled. As can be seen, there is a strong relationship between term frequency and raw JSD,
hence the need to rescale the raw scores relative to terms with similar frequency. After rescaling, we see a strong correlation between our final semantic change scores and the human ratings, as shown in Figure 1 (bottom) for the SE English data.
As with the approach of Hamilton et al. (2016b),
our method supports direct interpretation of semantic change. To understand the change in a word's typical usage, we can look at the overall most common replacements from each time period. Table 2 shows the scores and rankings of several selected terms from SE English, along with the most common substitutes from each time period.
Looking at the results, we can see, for example, strong agreement with human annotators on a dramatic change in the meaning of *plane* (comparing 1810–1860 vs. 1960–2010), from the geometric concept to the flying machine. On the other hand, our results suggest that human raters may have slightly underestimated the amount of change in
| | GEMS | SE Eng | SE Ger | SE Lat | SE Swe | Average | Average (weighted) |
|--------------------------------------|-------|--------|--------|--------|--------|---------|--------------------|
| Number of words | 96∗ | 37 | 40 | 48 | 31 | | |
| *Static Embedding Methods* | | | | | | | |
| Pömsl and Lyapin (2020) | - | 0.422 | 0.725 | 0.412 | 0.547 | - | - |
| Montariol et al. (2021) [static] | 0.347 | 0.321 | 0.712 | 0.372 | 0.631 | 0.477 | 0.452 |
| *Contextual Embedding Methods* | | | | | | | |
| Martinc et al. (2020b) | 0.510 | 0.313 | 0.436 | 0.467 | -0.026 | 0.340 | 0.394 |
| Montariol et al. (2021) [contextual] | 0.352 | 0.437 | 0.561 | 0.488 | 0.321 | 0.432 | 0.422 |
| Scaled JSD | 0.535 | 0.547 | 0.563 | 0.533 | 0.310 | 0.498 | 0.514 |
| Word | SE rating | SE rank | Scaled JSD | Scaled JSD rank | Corpus A substitutes (1810–1860) | Corpus B substitutes (1960–2010) |
|---------|-----------|---------|------------|-----------------|------------------------------------------|-------------------------------------------|
| plane | 0.88 | 1 | 0.97 | 1 | plane line planes point surface lines | plane aircraft planes jet airplane car |
| graft | 0.55 | 4 | 0.97 | 2 | tree plant stock vine fruit wood | corruption bribery fraud crime violence |
| tip | 0.68 | 2 | 0.85 | 7 | tipped tip covered end filled tips give | tip tips end tipped edge point top ends |
| gas | 0.16 | 23 | 0.72 | 14 | gas gases vapor air fire water | gas gasoline oil gases fuel water air |
| head | 0.30 | 10 | 0.68 | 16 | head face hand heads hands eyes | head face heads hand body hands eyes |
| bit | 0.31 | 9 | 0.51 | 23 | bit piece sort little pieces bits kind | bit little lot touch tad piece bits pieces |
| fiction | 0.02 | 35 | 0.41 | 27 | fiction history literature art poetry | fiction fact fantasy story stories novels |
| tree | 0.07 | 33 | 0.22 | 33 | trees tree plants branches plant wood | trees tree plants woods branches bushes |
| ounce | 0.28 | 11 | 0.08 | 37 | ounce inch pounds hour acre dollars | ounce pounds inch inches cups pieces |
Table 2: Example terms from the SE English dataset, showing the most common substitutes from our approach.
the meaning of *graft*, which was previously used mostly in reference to vegetation, but now most commonly refers to corruption.5 By contrast, *ounce* may be a case where our method has underestimated the change that has taken place. Older usages seem to map more generically to a wider range of quantities (hence the appearance among the early substitutes of hour, *acre*,
and *dollars*), whereas modern usage seems more restricted. Indeed, we do find some difference in the distribution of substitutes between the two time periods, but less of a difference than is typical for words with similar frequency, hence the low final score from our method (see Figure 1).
Although we do not emphasize it in this paper, our method can easily be combined with the approach of Eyal et al. (2022) to further investigate meaning changes, by inferring senses from the term replacements and looking at how their usage varies by time period. In particular, for each target term, we can construct a graph over the set of term substitutes (as nodes), where edge weights count the number of top-k substitute sets in which two substitutes co-occur. Following Eyal et al. (2022),
we experiment with Louvain community detection to identify sense clusters from these graphs for each term of interest, and use Jaccard similarity to associate each mention with a sense cluster, based on substitute overlap (see Appendix A for details).
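A sketch of this clustering step is given below; it re-expresses the description above in code, with the use of networkx's Louvain implementation (requiring networkx >= 2.8) being an assumption about tooling rather than a statement of the exact implementation.

```python
# Sketch of sense induction over substitutes: co-occurrence graph + Louvain
# communities, with mentions assigned to clusters by Jaccard overlap.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import louvain_communities


def sense_clusters(mention_substitutes):
    """mention_substitutes: one top-k substitute list per sampled mention."""
    graph = nx.Graph()
    for subs in mention_substitutes:
        for a, b in combinations(set(subs), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1  # number of top-k sets shared
            else:
                graph.add_edge(a, b, weight=1)
    return [set(c) for c in louvain_communities(graph, weight="weight")]


def assign_sense(substitutes, clusters):
    """Index of the cluster with highest Jaccard similarity to the substitutes."""
    subs = set(substitutes)
    return max(range(len(clusters)),
               key=lambda i: len(subs & clusters[i]) / len(subs | clusters[i]))


mentions = [["plane", "aircraft", "jet", "airplane", "car"],
            ["planes", "line", "point", "surface", "lines"]]
clusters = sense_clusters(mentions)
print([assign_sense(m, clusters) for m in mentions])
```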
Inspecting the distribution of these senses over time helps to distinguish the gradual adoption of existing senses from the creation of new ones. For example, the most common sense of *plane* is captured by the sense cluster {aircraft, jet, *airplane*,
car}, and as expected, this sense is not found in the 1810–1860 English data, except for two instances which appear to be errors in the inferred sense. By contrast, the second most common sense—{planes, line, point, *surface*}—appears in both time periods, but is much more common in the earlier time.
This approach also provides more insight into how the meaning of *graft* has changed. The most common sense cluster is the horticultural meaning
{tree, plant, stock, *vine*}, and this meaning occurs in both time periods, but is much more common in the earlier one. A second cluster, corresponding to illicit activity—{corruption, violence, *bribery*,
fraud}—occurs only in the later time period. This clustering method also surfaces a third sense with a medical meaning—{transplant, surgery, *disease*,
drug}—which is not revealed by the top few overall most common replacements given in Table 2.
## 5 Discussion And Related Work
As noted by others, new and larger datasets for rigorously evaluating semantic change are badly needed (Tahmasebi et al., 2021). Existing datasets are relatively small, and are mostly based on inspecting a limited number of examples per term.
Unfortunately, determining ground truth for semantic change is challenging, and producing such resources is costly. Ideally, future datasets for evaluation should be larger, both to allow for more robust evaluation, and to have sufficient targets for both hyperparameter tuning and evaluation.
In addition to the dataset we have used in this paper, two others are available from shared tasks on Spanish and Russian, respectively (Kutuzov and Pivovarova, 2021; Zamora-Reina et al., 2022). Both of these are comparable in size to the GEMS
dataset used here. Unfortunately, they are less useful for evaluation because most submissions to these shared tasks only evaluated on the task data, and not on other datasets. As shown by the replication of Martinc et al. (2020b) in Montariol et al. (2021), a method can sometimes perform well on one language but fail to generalize to others. As such, we have based our evaluation on datasets for which there has been a consistent evaluation of methods across multiple languages. As future work, a careful replication study of all methods from each competition on all available datasets, including an assessment of sensitivity to hyperparameters, would be highly informative.
Besides Eyal et al. (2022), the closest prior work to ours is Kudisov and Arefyev (2022), who use dynamic patterns to generate many variations on example usages sampled from the given corpora.
These variations are then used to generate hundreds of replacement terms from a masked language model with associated probabilities. These probabilities are averaged (heuristically combining replacements with differing numbers of word pieces) to obtain a mean vector for each sampled instance. Finally, semantic change is computed as the average cosine distance between all pairs of vectors across corpora. This method was evaluated as part of the LSCDiscovery shared task on Spanish (Zamora-Reina et al., 2022). Preliminary work on this method was described in Arefyev and Bykov (2021), where a slightly different version of it was evaluated on the RuShiftEval shared task on Russian (Kutuzov and Pivovarova, 2021).
Compared to Kudisov and Arefyev (2022), our approach is considerably simpler, and better suited to storing representations of a complete corpus for subsequent analysis and exploration. In particular, we only consider a small number of substitutes for each example (storing only the top-k most probable terms, without the associated probabilities).
We do not use dynamic patterns, and only consider terms in the model vocabulary as potential substitutes. We also associate each term with a single distribution over the model vocabulary per time period (not per mention), and use Jensen-Shannon divergence to more naturally measure the distance between distributions. Importantly, we also correct for frequency effects, as described above.
Although our approach avoids the onerous storage requirements of methods which save full contextual vectors, it still requires considerable processing time to obtain the top-k replacements for all tokens. Future work could explore smaller or more efficient models for this purpose.6 Finally, despite its simplicity, measuring the cosine distance between aligned static vectors remains a strong and efficient baseline (Hamilton et al., 2016b). More work is needed to determine where contextual embeddings can offer sufficient advantage in measuring semantic change to justify their greater computational cost.
Compared to static embeddings, our approach is weakest on the German and Swedish datasets, which could relate to the quality of the pretrained models that are available for those languages, the data used for pretraining, or perhaps issues that arise in tokenization of the reference corpora. For a tentative exploration of some possible factors, please refer to Appendix C.
## 6 Conclusion
We have presented a simplified and improved approach to measuring semantic change using contextual embeddings, based on the Jensen-Shannon Divergence between the distributions of the most probable replacements for masked tokens in different time periods, corrected for frequency effects.
This approach achieves superior performance on average, while remaining directly interpretable, with vastly reduced storage requirements.
6See Appendix B for results using various model sizes.
## Limitations
There are several limitations to this work which should be kept in mind. First and foremost, the datasets for evaluating the measurement of semantic change are relatively small, meaning that any estimates of correlation with human judgements will be relatively high variance. In addition, although the SemEval data includes text from four languages, there is no guarantee that these methods will work as well as they do on other languages or other time periods. Moreover, our approach depends on the use of pretrained language models, and the quality (or existence) of these and other relevant resources will vary by language.
In addition, like all methods, our approach involves numerous small choices, such as the number of background terms to sample, the number of samples taken, and the value of k in choosing top substitutes. We have kept our choices for these consistent across all five datasets, and these values have not been tuned. As such, different choices could result in better or worse correlation with human judgements. It is also worth noting that the human judgements collected by the creators of these datasets may involve errors or noise. It is possible that a different sample of data, or having different people evaluate the same data, would produce different judgements.
For exploring the variation in word meanings, we have used the approach of Eyal et al. (2022) directly, with the only differences being that we mask terms of interest (allowing us to work with terms that do not exist in the model vocabulary), and do not combine multiple forms of lemmas when getting the top-k terms. We adopt this approach because it is especially easy to combine with our own work, but different methods for word sense induction might lead to different conclusions about the different meanings of a term that existed in any particular time period. In addition, any conclusions drawn are necessarily limited to the corpora that are used, most of which will be a highly biased sample of all text that was produced by all people for any given period of time.
## Ethical Considerations
This work only uses well established datasets for the purposes for which they were designed (studying changes in languages and evaluating measurement of semantic change), thus poses few ethical concerns that did not already exist for these data.
Nevertheless, it is worth emphasizing that all of the methods discussed in this paper only return, at best, a noisy estimate of semantic change. Words are used differently by different people, and attempts to measure changes in language inevitably simplify the diversity of uses into a single number, which discards a great deal of nuance. As such, any work applying these methods to measure semantic change should be aware of their limitations and proceed carefully.
## Acknowledgements
Many thanks to Kaitlyn Zhou and anonymous reviewers for helpful comments and suggestions.
## References
Nikolay Arefyev and Vasily Zhikov. 2020. BOS at SemEval-2020 task 1: Word sense induction via lexical substitution for lexical semantic change detection.
In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*.
Nikolay V. Arefyev and D. A. Bykov. 2021. An interpretable approach to lexical semantic change detection with lexical substitution. In Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies (Dialogue).
Mark Davies. 2010. The corpus of historical American English (COHA). Available online at https://www.english-corpora.org/coha/.
Matan Eyal, Shoval Sadde, Hillel Taub-Tabib, and Yoav Goldberg. 2022. Large scale substitution-based word sense induction. In *Proceedings of ACL*.
Lea Frermann and Mirella Lapata. 2016. A Bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics.
Mario Giulianelli, Marco Del Tredici, and Raquel Fernández. 2020. Analysing lexical semantic change with contextualised word representations. In *Proceedings of ACL*.
Hila Gonen, Ganesh Jawahar, Djamé Seddah, and Yoav Goldberg. 2020. Simple, interpretable and stable method for detecting words with usage change across corpora. In *Proceedings of ACL*.
Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books ngram corpus.
In *Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics*.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky.
2016a. Cultural shift or linguistic drift? Comparing two computational measures of semantic change. In Proceedings of EMNLP.
William L. Hamilton, Jure Leskovec, and Dan Jurafsky.
2016b. Diachronic word embeddings reveal statistical laws of semantic change. In *Proceedings of* ACL.
Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embeddings: An ecological view. In *Proceedings of ACL*.
Artem Kudisov and Nikolay Arefyev. 2022. BOS at LSCDiscovery: Lexical substitution for interpretable lexical semantic change detection. In Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change.
Sinan Kurtyigit, Maike Park, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2021.
Lexical semantic change discovery. In Proceedings of ACL.
Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: A survey. In Proceedings of the International Conference on Computational Linguistics.
Andrey Kutuzov and Lidia Pivovarova. 2021. RuShiftEval: A shared task on semantic shift detection for Russian. In Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies (Dialogue).
Yang Liu, Alan Medlar, and Dorota Glowacka. 2021.
Statistically significant detection of semantic shifts using contextual word embeddings. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.
Matej Martinc, Petra Kralj Novak, and Senja Pollak.
2020a. Leveraging contextual embeddings for detecting diachronic semantic shift. In *Proceedings* of the Twelfth Language Resources and Evaluation Conference.
Matej Martinc, Syrielle Montariol, Elaine Zosa, and Lidia Pivovarova. 2020b. Capturing evolution in word usage: Just add more clusters? In *Proceedings* of the Web Conference 2020.
Syrielle Montariol, Matej Martinc, and Lidia Pivovarova.
2021. Scalable and interpretable semantic change detection. In *Proceedings of NAACL*.
Martin Pömsl and Roman Lyapin. 2020. CIRCE at SemEval-2020 task 1: Ensembling context-free and context-dependent word representations. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*.
Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi.
2020. SemEval-2020 task 1: Unsupervised lexical semantic change detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation.
Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2021.
Survey of computational approaches to lexical semantic change detection. In Nina Tahmasebi, Lars Borin, Adam Jatowt, Yang Xu, and Simon Hengchen, editors, *Computational approaches to semantic change*,
chapter 1, pages 1–91. Language Science Press.
Xuri Tang. 2018. A state-of-the-art of semantic change computation. *Natural Language Engineering*, 24(5):649–676.
Frank D. Zamora-Reina, Felipe Bravo-Marquez, and Dominik Schlechtweg. 2022. LSCDiscovery: A
shared task on semantic change discovery and detection in Spanish. In *Proceedings of the 3rd Workshop on Computational Approaches to Historical* Language Change.
## A Experimental Details
For each dataset, we use a BERT model, preferring a high quality monolingual model where available. For GEMS and SE English, we use bert-large-uncased. For SE Latin we use bert-base-multilingual-uncased, deepset/gbert-large for SE German, and KB/bert-base-swedish-cased for SE Swedish, with all models available through HuggingFace. In all cases, we first adapt the model to the dataset by doing continued masked language model training for five epochs on the union of the two associated corpora.
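For reference, this adaptation step can be run with the HuggingFace Trainer. The snippet below is only a minimal sketch: it assumes plain-text corpus files (one passage per line), and the batch size, sequence length, and output path are illustrative placeholders rather than the exact settings used; only the five epochs are taken from the description above.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bert-large-uncased"          # swapped per language, as above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Union of the two time-period corpora (assumed to be plain-text files).
corpus = load_dataset("text", data_files={"train": ["corpus1.txt", "corpus2.txt"]})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model", num_train_epochs=5,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```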
For the SemEval data, the corpora are provided in both raw and lemmatized formats, with the target terms given as lemmas. Because the contextual embedding models have been trained on nonlemmatized text, we prefer to embed mentions using the raw (non-lemmatized data). However, because of uncertainty about how the raw text was lemmatized, we begin by aligning the lemmatized data to the non-lemmatized text. We then index terms in the lemmatized data (for both target terms and random background terms), and then map these indices to indices in the corresponding non-lemmatized data, which we then sample to get replacements.
To do the alignment, we begin by tokenizing the text, and then removing the punctuation from both the lemmatized and non-lemmatized text, storing indices to allow mapping back to the original token sequences in the non-lemmatized data. For each pair of texts (a raw and a lemmatized form), we first identify tokens that occur exactly once in each, and align the positions of these to each other, as long as the ordering of these tokens is consistent. We then recursively do this for the subsequences between each adjacent pair of aligned tokens. Given these landmark alignments, (using exact matches), we then attempt to align all remaining substrings between each pair of aligned tokens, (adding padding tokens as necessary), using Levenshtein distance as a heuristic way to evaluate possible token alignments. Finally, we do a post-alignment correction to consider inserting a padding token in each position to correct for occasional off-by-one errors, and taking the best scoring overall alignment.
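A compact sketch of the landmark step is given below. It is a simplification: it keeps only the exactly-once tokens with a greedy ordering check, and it omits the Levenshtein-based alignment of the remaining substrings and the off-by-one correction pass (the replication code contains the full procedure).

```python
from collections import Counter

def landmark_align(raw, lem, raw_off=0, lem_off=0):
    """Return (raw_index, lem_index) pairs for landmark tokens, recursing on
    the gaps between adjacent landmarks. Simplified sketch only."""
    if not raw or not lem:
        return []
    raw_c, lem_c = Counter(raw), Counter(lem)
    # Candidate landmarks: tokens occurring exactly once on each side.
    cands = [t for t in raw_c if raw_c[t] == 1 and lem_c.get(t) == 1]
    pairs = sorted((raw.index(t), lem.index(t)) for t in cands)
    # Keep only landmarks whose ordering is consistent on both sides.
    kept, last_lem = [], -1
    for ri, li in pairs:
        if li > last_lem:
            kept.append((ri, li))
            last_lem = li
    aligned = [(ri + raw_off, li + lem_off) for ri, li in kept]
    # Recurse on the subsequences between each adjacent pair of landmarks.
    bounds = [(-1, -1)] + kept + [(len(raw), len(lem))]
    for (r0, l0), (r1, l1) in zip(bounds, bounds[1:]):
        if r1 - r0 > 1 and l1 - l0 > 1:
            aligned += landmark_align(raw[r0 + 1:r1], lem[l0 + 1:l1],
                                      raw_off + r0 + 1, lem_off + l0 + 1)
    return sorted(aligned)
```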
By inspecting the target tokens in the raw (nonlemmatized text) that are obtained using this alignment (based on indexing target terms in the lemmatized version, then mapping these indices to the non-lemmatized text using the alignment), we find that the vast majority of mentions are properly aligned. To eliminate the small number of alignment errors, we only keep tokens that are at least two characters in length where the non-lemmatized form comprises at least 0.02% of the total number of indexed terms for a given lemma, and where the first letter of the indexed token matches the first letter of the target lemma. To account for a small number of special cases (such as examples in SE
Latin where a word sometimes starts with "j" and sometimes with "i", (presumably due to OCR errors), we create a handful of exceptions to the first letter rule. For full details of this alignment process and exceptions, please refer to replication code.7 In addition, for the SE English data, target terms
(only) are given with specific part of speech tags.
However, to better match a random sample of background lemmas, we ignore part of speech in our experiments, and index all occurrences of each target term in the lemmatized data. Future work could explore the impact of restricting measurements to certain parts of speech, both for target and background terms.
For GEMS, where the targets are not lemmatized, we ignore lemmatization and simply sample from all exact matches of the target terms as tokens in the raw text. As with past work, we combine the multiple annotations for the GEMS data by averaging their scores.
All masked tokens are fed into the appropriate model with up to 50 tokens to either side from the original context, which returns a probability distribution over the model vocabulary. When computing the top-k most probable substitutes, we follow Eyal et al. (2022) and exclude stopwords and partial word pieces (i.e., those that start with \#\#). For GEMS and SE English, we use the stopword list from the Snowball stemmer.8 For SE Latin, we use a Latin stopword list from the Perseus Digital Library.9 For SE German and SE Swedish, we use the respective stopword lists from NLTK.10 For the exploration of sense clusters in the main paper using Louvain community detection, we use the same data as used in measuring semantic change, keeping k = 5, but we exclude the target term itself when gathering the top-k substitutes.11 We then construct a weighted graph for each target term, where nodes represent substitutes, and edge weights correspond to the number of top-k replacement sets in which each pair of replacements appear together.
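The substitute lookup can be sketched as follows. This is an illustrative reimplementation rather than the exact replication code: it assumes the mask appears exactly once after tokenization and that the stopword set is passed in by the caller.

```python
import torch

def topk_substitutes(model, tokenizer, tokens, idx, k=5, stopwords=frozenset()):
    """Top-k substitutes for tokens[idx], using up to 50 tokens of context on
    each side and skipping stopwords and partial word pieces ('##...')."""
    left, right = tokens[max(0, idx - 50):idx], tokens[idx + 1:idx + 51]
    text = " ".join(left + [tokenizer.mask_token] + right)
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**enc).logits[0, pos]
    substitutes = []
    for tok_id in torch.argsort(logits, descending=True):
        word = tokenizer.convert_ids_to_tokens(int(tok_id))
        if word.startswith("##") or word.lower() in stopwords:
            continue
        substitutes.append(word)
        if len(substitutes) == k:
            break
    return substitutes
```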
To obtain sense clusters, we use the implementation of Louvain community detection in networkx with default parameter settings, to detect clusters in the graph.12 Finally, we associate each instance of a target term with a corresponding cluster using Jaccard similarity between the instance's set of top-k replacements and the terms in the cluster.
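In code, the clustering step amounts to roughly the sketch below; the helper names and the seed are illustrative choices, while `louvain_communities` with default parameters and the Jaccard assignment follow the description above.

```python
import networkx as nx

def cluster_senses(substitute_sets):
    """Build the weighted substitute co-occurrence graph, detect sense
    clusters with Louvain, and assign each mention to the cluster with the
    highest Jaccard overlap with its own top-k substitute set."""
    graph = nx.Graph()
    for subs in substitute_sets:                 # one top-k set per mention
        for i, u in enumerate(subs):
            for v in subs[i + 1:]:
                if graph.has_edge(u, v):
                    graph[u][v]["weight"] += 1
                else:
                    graph.add_edge(u, v, weight=1)
    clusters = [set(c) for c in nx.community.louvain_communities(graph, seed=0)]

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    return [max(range(len(clusters)), key=lambda c: jaccard(subs, clusters[c]))
            for subs in substitute_sets]
```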
All of these experiments were run on either an NVidia RTX A6000 or A5000 GPU.
## B Alternative Models
In order to investigate the effect of model size on the performance of our approach to measuring semantic change, we try a range of model sizes for BERT on the English datasets, all available from HuggingFace. The results are shown in Table 3.
As can be seen, there is a clear correlation between model size and task performance for the SE English data, but this is not the case for the GEMS
dataset, perhaps because the COHA corpora used for GEMS provide longer contexts for term mentions (see Appendix C).
We also demonstrate the effect of using a multilingual model, rather than a language specific model, for all datasets other than SE Latin (for which we are already using a multilingual model in the main paper). As can be seen in Table 4, the multilingual model uniformly results in worse performance, demonstrating the importance of having a strong language-specific model for measuring semantic change in this way.
## C Exploring Performance Differences Across Languages
Using the method presented in the main paper, our results were better than using static word vectors for English and Latin, but worse for German and Swedish. Unfortunately, we do not yet have a satisfactory explanation for this discrepancy in performance. Notably, other approaches using contextual embeddings (e.g., Montariol et al., 2021), have also performed worse on these languages (relative to approaches based on Hamilton et al., 2016b).
Several possible explanations suggest themselves for why methods based on contextual embeddings might struggle. For example, tokenization used for these models breaks some words into multiple word pieces, which is not an issue for static embeddings. Another consideration is the amount of context in which the examples occur in the reference corpora (since static vectors typically only use very small context windows, whereas contextual embedding models are capable of using much longer contexts). We might also consider factors relevant to all methods, such as the number of examples given for each target term, or the number of different word forms in which each lemma occurs in the corpora provided.
Although several of these factors perhaps help to explain why performance on English is especially good (relative to static vectors), they do not provide a convincing way to explain the differences in performance observed on the other languages. In particular, the SE English data has the highest proportion of target words that occur in the model vocabulary (without being broken into multiple word pieces), and these lemmas occur in text using the fewest number of surface forms per target.
By contrast, the other languages tend to have more surface forms, on average, with fewer of the target terms occurring in the corresponding model vocabulary, but Swedish is mid-range on the latter
(with German being lowest). Latin, by contrast, tends to have more examples of target terms per corpus in both time periods (with German again the lowest), but Swedish is between English and Latin.
The Swedish model does have a larger vocabulary, but it is not as large as the multilingual model we used for Latin. Quantitative summaries of these factors are presented for reference in Table 5.
Ultimately, perhaps the best explanation has to do with the quality of the underlying pretrained models available for each language. Given that different models for different languages were trained on entirely different data, this seems like a highly relevant source of potential differences. Unfortunately, it is difficult to assess the overall quality of pretrained models across languages, so all of these explanations essentially remain no more than hypotheses for further investigation.
Table 3: Results on the English datasets (Spearman correlation) using a range of BERT model sizes on HuggingFace.
| Model | GEMS | SE English |
|--------------------------------------------|--------|--------------|
| google/bert_uncased_L-4_H-256_A-4 (mini) | 0.559 | 0.433 |
| google/bert_uncased_L-4_H-512_A-8 (small) | 0.544 | 0.495 |
| google/bert_uncased_L-8_H-512_A-8 (medium) | 0.538 | 0.522 |
| google/bert_uncased_L-12_H-768_A-12 (base) | 0.541 | 0.512 |
| bert-base-uncased | 0.509 | 0.525 |
| bert-large-uncased | 0.535 | 0.547 |
| Model | GEMS | SE Eng | SE Ger | SE Swe |
|------------------------------------------------------|-------|--------|--------|--------|
| bert-base-multilingual-uncased | 0.524 | 0.480 | 0.481 | 0.209 |
| Language specific model (from Table 1 in main paper) | 0.535 | 0.547 | 0.563 | 0.310 |

Table 4: Results when using a multilingual model, compared to the language specific models used in the paper.
Table 5: Quantitative summary statistics of various factors which we might be expected to affect differences in performance across languages (relative to approaches based on static word embeddings). Median lower target count is the median across target terms of the number of examples of each target term in the corpus with the lower count
(early or later). Median target forms is the median across examples of the number of surface forms corresponding to each target lemma. Median context length is the median number of tokens in which target terms occur. % targets as whole words is the percent of target terms which exist in the model vocabulary. Vocab size is the number of words in the model vocabulary. Ultimately, none of these provides a convincing explanation for observed differences.
| Dataset | Model | Median lower target count | Median target forms | Median context length | % targets as whole words | Vocab size |
|---------|--------------------------------|---------------------------|---------------------|-----------------------|--------------------------|------------|
| GEMS | bert-large-uncased | 93 | 1 | 191 | 97.0 | 30522 |
| SE Eng | bert-large-uncased | 209 | 4 | 26 | 95.6 | 30522 |
| SE Ger | deepset/gbert-large | 101 | 7 | 28 | 22.9 | 31102 |
| SE Lat | bert-base-multilingual-uncased | 472 | 8 | 28 | 25.0 | 105879 |
| SE Swe | KB/bert-base-swedish-cased | 249 | 9 | 25 | 74.2 | 50325 |
## D Additional Results On Gems
The GEMS dataset has been used for evaluation by many additional papers, beyond those discussed in the main body of this paper. However, these have not all used consistent metrics and corpora, making comparison difficult. For completeness, we include additional results here, as shown in Table 6.
The GEMS dataset was originally introduced by Gulordava and Baroni (2011), from whom we obtained the labeled data. These authors reported results in terms of Pearson correlation, and used multiple datasets for measuring semantic change, including the Google Books Corpus. Frermann and Lapata (2016) also used this dataset for evaluation, but used different additional data (beyond COHA), and reported results in terms of Spearman correlation.
More recent papers using this dataset (from Giulianelli et al., 2020 onwards) have tended to make use of the COHA data from the 1960s and 1990s as the corpus in which to measure change, to correspond to the periods used in the annotation process, which we also use for our results in this paper. Martinc et al. (2020b) reported very strong results on this dataset, but subsequent work from the same authors (Montariol et al., 2021) revealed that this method performed relatively poorly on the SemEval datasets, as reported in Table 1 in the main paper.
Table 6: Additional results on the GEMS dataset from Gulordava and Baroni (2011). Note that not all papers reporting results on this dataset used the same corpora or evaluation metric, hence we report both Pearson and Spearman correlation, and restrict ourselves to the COHA dataset, which was used by all authors. Numbers in brackets show the number of target terms excluded.
We evaluate using the exclusions of both Giulianelli et al. (2020) [99] and Martinc et al. (2020b) [96] to enable a full comparison. Note that the high correlation reported on this dataset by Martinc et al. (2020b) did not seem to transfer to the SemEval datasets, as shown by Montariol et al. (2021) and Table 1 in the main paper.
| Paper | Pearson | Spearman |
|--------------------------------|-----------|------------|
| Gulordava and Baroni (2011) | 0.386 | - |
| Frermann and Lapata (2016) | - | 0.377 |
| Giulianelli et al. (2020) [99] | 0.231 | 0.293 |
| Martinc et al. (2020b) [96] | 0.560 | 0.510 |
| Montariol et al. (2021) [96] | - | 0.352 |
| Scaled JSD [96] | 0.532 | 0.535 |
| Scaled JSD [99] | 0.541 | 0.553 |
Different authors have excluded different numbers of words from the 100 target terms in evaluation. Giulianelli et al. (2020) excluded *extracellular* due to insufficient occurrences in COHA during the 1960 and 1990s, which we also exclude for the same reason. Martinc et al. (2020b) and Montariol et al. (2021) excluded assay, extracellular, *mediaeval*, and *sulphate* because they were split into multiple tokens by BERT. Because we mask the target terms, multi-piece words are not a problem, but for completeness we evaluate using the exclusions of both Giulianelli et al. (2020) and Martinc et al. (2020b) and report both in Table 6.
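For completeness, the evaluation itself reduces to correlating predicted change scores with the annotated scores over the non-excluded targets; a minimal sketch (assuming dict-valued scores keyed by target term) is:

```python
from scipy.stats import pearsonr, spearmanr

def evaluate(pred, gold, exclude=("extracellular",)):
    """Pearson and Spearman correlation between predicted and annotated
    change scores, after dropping the excluded target terms."""
    targets = [w for w in gold if w not in exclude and w in pred]
    p = [pred[w] for w in targets]
    g = [gold[w] for w in targets]
    return pearsonr(p, g)[0], spearmanr(p, g)[0]
```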
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5 and the Limitations Section on Page 5
✗ A2. Did you discuss any potential risks of your work?
Yes, in the Ethical Considerations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
Section 4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We are not creating new artifacts
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We are only using well established resources for their intended purposes
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We are only using well established resources for their intended purposes
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We are only using well established resources for their intended purposes
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Number of words per dataset in Table 1, along with pointers to dataset descriptions
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Partially; we report the computing infrastructure used in Appendix B; the relevant parameter counts are all based on standard models.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Yes, we note that we did not tune the relevant hyperparameters in the Limitations section
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
The datasets that are available are too small to be able to provide meaningful estimates for these.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.2 and Appendix B
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
kondo-etal-2023-probing | Probing Physical Reasoning with Counter-Commonsense Context | https://aclanthology.org/2023.acl-short.53 | In this study, we create a CConS (Counter-commonsense Contextual Size comparison) dataset to investigate how physical commonsense affects the contextualized size comparison task; the proposed dataset consists of both contexts that fit physical commonsense and those that do not. This dataset tests the ability of language models to predict the size relationship between objects under various contexts generated from our curated noun list and templates. We measure the ability of several masked language models and encoder-decoder models. The results show that while large language models can use prepositions such as {``}in{''} and {``}into{''} in the provided context to infer size relationships, they fail to use verbs and thus make incorrect judgments led by their prior physical commonsense. | # Probing Physical Reasoning With Counter-Commonsense Context
Kazushi Kondo,1 Saku Sugawara,2 **Akiko Aizawa**2 1The University of Tokyo, 2National Institute of Informatics [email protected], {saku,aizawa}@nii.ac.jp
## Abstract
In this study, we create a CConS (Countercommonsense Contextual Size comparison)
dataset to investigate how physical commonsense affects the contextualized size comparison task; the proposed dataset consists of both contexts that fit physical commonsense and those that do not. This dataset tests the ability of language models to predict the size relationship between objects under various contexts generated from our curated noun list and templates. We measure the ability of several masked language models and generative models. The results show that while large language models can use prepositions such as "in" and
"into" in the provided context to infer size relationships, they fail to use verbs and thus make incorrect judgments led by their prior physical commonsense.
## 1 Introduction
Humans possess physical commonsense regarding the behavior of everyday objects. Physical commonsense knowledge is relevant to their physical properties, affordances, and how they can be manipulated (Bisk et al., 2020). While a significant amount of physical commonsense can be expressed in language (Forbes and Choi, 2017; Bisk et al.,
2020), direct sentences describing facts such as
"people are smaller than houses" rarely appear because of reporting bias (Gordon and Van Durme, 2013; Ilievski et al., 2021). Recent language models have succeeded in tasks that do not require contextual reasoning, such as size comparison and prediction of event frequency (Talmor et al., 2020).
However, what about inferences that are context-dependent? Whether a language model can make correct inferences in various contexts is important because physical reasoning is highly context-dependent (Ogborn, 2011). Several studies on contextual physical reasoning (Forbes et al., 2019; Bisk et al., 2020; Aroca-Ouellette et al., 2021; Zellers et al., 2021) have been conducted to produce datasets that assess the ability to recognize physical situations described in writing. Without context, however, these datasets may be answered by commonsense.

![0_image_0.png](0_image_0.png)
Humans also can reason in ways that differ from simply using commonsense. For instance, if the context "there is a house inside a light bulb." is provided, humans can still imagine the situation and reason that the bulb must be larger than the house. In other words, commonsense is just a sweeping generalization, and reasoning about context must be independent of commonsense. This reasoning with defeasibility, which reflects the ability to reason logically without relying only on commonsense, seems to have been overlooked in the study of language models compared to the acquisition of commonsense. Previous investigations of contextual physical reasoning (Aroca-Ouellette et al., 2021; Yu et al., 2022) failed to distinguish physical reasoning from the simple use of physical commonsense. To appropriately measure physical reasoning ability, we must use contexts that go against commonsense to rule out the possibility that the model is overconfident in physical commonsense.
In this study, we investigate the behavior of the language model concerning physical commonsense given the context of a situation that contradicts commonsense. We choose the size comparison task despite various possible domains of physical commonsense (Ilievski et al., 2021). The task is one of the easiest physical commonsense reasoning tasks for language models (Forbes and Choi, 2017; Goel et al., 2019), and it is also easy to add a context to change the relationship between sizes. For example, in this study, the context is a sentence that implies a size relationship, such as "<obj1>
contains <obj2>."
For this purpose, we created a new dataset, CConS (Counter-commonsense Contextual Size comparison)1. This dataset contains 1,112 sentences generated from 139 templates and tests the ability of language models to infer the size relationship between objects using a cloze-style prompt.
Figure 1 shows the size comparison examples with or without contexts that (do not) agree with ordinary commonsense. Our experiments using recent language models show that GPT-3 (text-davinci-003) (Brown et al., 2020) correctly reasons in context when it is consistent with commonsense, yielding 85% accuracy. In contrast, even GPT-3 can only show poor performance (41% accuracy) for examples that contradict commonsense. This suggests that the models may not effectively distinguish between physical commonsense and inferences based on contexts, leading to incorrect predictions. Nevertheless, when prepositions hint at the relationships, the accuracy rate exceeded 55%, even for counter-commonsense examples. In summary, our counter-commonsense examples reveal the difference in influence between prepositions and verbs in contextualized physical reasoning.
The contributions of this study are as follows:
1. We create a dataset that assesses size comparison ability more precisely by contrasting examples that conform to physical commonsense with ones that do not.
2. We show that physical commonsense prevents measuring the language models' ability of contextual physical reasoning.
3. We demonstrate that even large models perform poorly when making inferences that violate physical commonsense. Specifically, they struggle to infer size relations implied by verbs and can infer them only when prepositions indicate them.

1https://github.com/cfkazu/Counter-Commonsense-Context
## 2 Related Works
Size Comparison Task The size comparison task, which previous studies (Yang et al., 2018; Goel et al., 2019) investigated since the earlier linguistic representations, such as GloVe (Pennington et al., 2014) or ELMo (Peters et al., 2018), is one of the easiest physical common-sense inference tasks for language models (Forbes and Choi, 2017; Goel et al., 2019). While there are many prior studies
(Elazar et al., 2019; Zhang et al., 2020) on this topic, VerbPhysics (Forbes and Choi, 2017) is the most similar to this study in that it focuses on the relationship between sizes and verbs. There are also some other approaches, such as methods that extract external knowledge (Elazar et al., 2019),
filling-masks (Talmor et al., 2020), or generate images (Liu et al., 2022). These results suggest that the commonsense of comparing object size is encoded in recent language models. However, these studies do not consider the context that might influence the results of size comparisons.
Defeasible Reasoning According to Koons
(2022), defeasible reasoning is an argument that is rationally persuasive but not completely valid as a deduction. This defeasible reasoning is similar to the subject of this study in that it involves the recognition that commonsense and assumptions in a given context are not entirely correct propositions.
Therefore, this study can be seen as an investigation into whether a language model can capture commonsense as defeasible reasoning. The creation of a dataset dealing with defeasible reasoning has been discussed by Rudinger et al. (2020) and Allaway et al. (2022). Our study is similar to Allaway et al. (2022) in that it generates sentences that violate the context by fitting words to a template.
However, this study differs in that we also generate examples contrary to commonsense for measuring the actual performance of the language model as well as the differences from the ordinary case.
## 3 Dataset Creation
In this study, we create 139 templates and automatically generate 1,112 examples. Table 1 lists examples of these templates.
| Template | Generated: Ordinary Examples | Generated: Counter-Commonsense Examples |
|-------------------------------|----------------------------------|-------------------------------------------|
| He found <portable> in <box>. | He found a key in a key box. | He found a monitor in a key box. |
| <box> contains <portable>. | A key box contains a key. | A key box contains a monitor. |
| <*> fills <box>. | A marble fills a bin. | A refrigerator fills a bin. |
| <*> is covered by <flat>. | A pen is covered by a newspaper. | A desk is covered by a handkerchief. |
Table 1: Examples of the templates. <tag> constrains possible nouns to be filled. For example, <box> means that the noun entering there must have the attribute "box," that is, it must be able to hold things. <*> indicates that any words in the noun list (only material nouns) can be inserted.
Designing Template We focus on the comprehensiveness of verb phrases while designing templates to ensure that the choice of verbs is not arbitrary. Therefore, we extract 139 verb phrases that indicate size relationships from the Oxford 5000 dictionary 2and manually assemble simple sentences. For example, the statement "<obj1> beats
<obj2>" is not included in this template because this statement is not informative enough to determine a size relation.
Moreover, in comparing sizes, we also notice that not only verbs but also prepositions such as "in" or "into" may provide clear clues about the size relationships. Therefore, we label the templates that contain these prepositions as easy templates, and those that do not as hard templates (see the sketch below). In subsequent experiments, we also investigate the effect of this difference on the behavior of the language model.
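As a rough illustration of this split (the actual assignment was made over the curated template list, not by this string check), a template can be routed to the easy subset whenever one of these prepositions occurs in it:

```python
EASY_PREPOSITIONS = {"in", "into"}

def is_easy_template(template: str) -> bool:
    """Heuristic sketch: a template counts as 'easy' if it contains a
    preposition that directly hints at the size relation."""
    words = template.lower().replace(".", " ").split()
    return any(word in EASY_PREPOSITIONS for word in words)

assert is_easy_template("He found <portable> in <box>.")
assert not is_easy_template("<box> contains <portable>.")
```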
Restriction on Noun If nouns are arbitrarily inserted, the resulting sentences may be nonsensical or impossible for a human to imagine. For example, we choose not to include the sentence "the stone threw the dog" because it is beyond imagination.
We place restrictions on the nouns used in the sentence templates by defining tags to avoid this nonsense. A single placeholder can have constraints (multiple tags). There are 18 types of tags, including "have_hands," "box," and "portable."
Tags are manually determined to abstract the properties of verb phrases. We also use the Oxford 5000 dictionary to obtain a list of nouns referring to physical objects. One of the nouns that satisfy all constraints is randomly selected from a list of 195 nouns and inserted.
Generating Sentences The template tags are replaced with the corresponding nouns to generate the context, and the questions asking for size comparisons are combined. For example, the contextualized question text provided to the masked language models is as follows:
"«context» In this situation, the size of <obj1> is probably much [MASK] than the size of <obj2>."
Contexts and questions are used to generate input for each of the masked language models and generative models. We classify generated sentences to the Ordinary or Counter-Commonsense (CCommon) subset based on whether the size relationship between objects indicated by the template accords commonsense.
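The generation step can be pictured with the toy sketch below; the tag inventory, noun list, and article handling are simplified stand-ins for the curated resources described above, not the actual ones.

```python
import random

# Toy stand-ins for the curated noun list and tag constraints.
NOUNS_BY_TAG = {"box": ["key box", "bin"], "portable": ["key", "pen", "monitor"]}

QUESTION = ("{context} In this situation, the size of {obj1} is probably "
            "much [MASK] than the size of {obj2}.")

def generate_example(template, slot_tags, rng=random):
    """Fill the tagged slots of a template with nouns satisfying the tag
    constraints and wrap the result in the cloze-style question."""
    fillers = {slot: "a " + rng.choice(NOUNS_BY_TAG[tag])
               for slot, tag in slot_tags.items()}
    context = template
    for slot, noun_phrase in fillers.items():
        context = context.replace(slot, noun_phrase, 1)
    slots = list(fillers)
    return QUESTION.format(context=context,
                           obj1=fillers[slots[0]], obj2=fillers[slots[1]])

print(generate_example("<obj1> contains <obj2>.",
                       {"<obj1>": "box", "<obj2>": "portable"}))
```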
## 4 Experiment
Task Definition We measure the ability of masked language models and generative models to recognize size relationships by providing sentences for each architecture. These sentences are generated from templates (Section 3). We also see how the language model's behavior changes when context sentences follow or do not follow a general common-size relationship.
Comparison Aspects We investigate how language models create physical reasoning without being biased by their prior physical commonsense.
1. How do the physical reasoning results of the language model change when contexts are consistent or inconsistent with commonsense?
2. How does the performance of a language model change when comparing an easy dataset that contains certain prepositions that hint at size relationships with a hard dataset that does not?
Model Settings In this study, BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2020) are used to assess the performance of the masked language models. We also investigate how the size of the model affects physical reasoning. We choose T0 (Sanh et al.,
2022) and GPT-3 (text-davinci-003) to evaluate the performance of the generative model.
According to Talmor et al. (2020), RoBERTa-Large outperforms BERTs and RoBERTa-Base in a no-context size comparison task. Proceeding from this analysis, we attempt to detect whether commonsense influences physical reasoning by giving examples contrary to commonsense as context.
Tasks Format Details The tasks are performed by inputting sentences according to the format defined for each of the models, as follows.
## Format For Masked Language Models
WithContext: «context» In this situation, the size of <obj1>
is probably much [MASK] than the size of <obj2>.
WithoutContext: The size of <obj1>
is probably much [MASK] than the size of <obj2>.
The candidates for [MASK] are "larger," "bigger," "smaller," and "shorter." If the sum of the probabilities of the first two options exceeds 0.5, language models predict that obj1 is larger than obj2. Therefore, the language model always makes binary decisions.
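Concretely, the decision rule can be sketched as below. The restriction of the softmax to the four candidate tokens and the single-token assumption for the candidates are our reading of the setup, not a verbatim reproduction of the authors' code, and bert-base-uncased is only one of the evaluated models.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def predicts_obj1_larger(question: str) -> bool:
    """Return True if P('larger') + P('bigger') > 0.5 over the candidates."""
    enc = tok(question.replace("[MASK]", tok.mask_token), return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, pos]
    cand_ids = tok.convert_tokens_to_ids(["larger", "bigger", "smaller", "shorter"])
    probs = torch.softmax(logits[cand_ids], dim=-1)   # renormalise over candidates
    return (probs[0] + probs[1]).item() > 0.5
```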
## Format For Generative Models
WithContext: «context» Which is bigger in this situation, <obj1> or <obj2>?
WithoutContext: Which is bigger in general, <obj1> or <obj2>?
«context» is a sentence generated from templates.
Human Evaluation We ask crowdworkers to perform the same size comparison task to measure the accuracy of humans in this task. Thus, we can test the validity of the automatically generated questions. The crowdworkers are given the same context and make a choice that is larger. (See Appendix B for details.) Five crowdworkers are assigned to each question. We use some intuitive examples, such as "<obj1> contains <obj2>," which are provided for qualification, and exclude those who get such examples wrong or choose the same answer for all examples.
| Model | Ordinary | CCommon | NoCon |
|------------|-----------|-----------|-----------|
| BERT-B | 0.483 | 0.515 | 0.495 |
| BERT-L | 0.500 | 0.521 | 0.494 |
| RoBERTa-B | 0.554 | 0.443 | 0.507 |
| RoBERTa-L | 0.692 | 0.413 | 0.639 |
| ALBERT-B | 0.500 | 0.521 | 0.494 |
| ALBERT-XXL | 0.720 | 0.346 | 0.701 |
| T0++ | 0.682 | **0.530** | 0.589 |
| T0 | 0.684 | 0.443 | 0.574 |
| GPT-3 | **0.856** | 0.415 | **0.764** |
| Human | 0.814 | 0.798 | 0.791 |

Table 2: The inference results of the language model for data sets where the context follows and does not follow commonsense and context is removed.
## 5 Result And Analysis
Tables 2 and 3 exhibit the performance of the language model on our datasets. GPT-3 outperforms other models in Ordinary and NoCon setups.
RoBERTa-Large and ALBERT-XXLarge show better reasoning ability than the other masked language models in the Ordinary dataset. However, for the CCommon dataset, the performance of the pretrained language model decreases, particularly in ALBERT-XXLarge. This result suggests that commonsense built into the model hinders its ability to make accurate judgments. Other models struggle to capture size relationships. These results without context (NoCon) are generally consistent with the findings of a previous investigation of the nocontext size comparison task conducted by Talmor et al. (2020).
In some CCommon examples, BERT performs better than RoBERTa. This may be because BERT is less equipped with commonsense, allowing it to make simpler judgments without being influenced.
Impact of Prepositions Prepositions did not significantly impact the prediction for the masked language models in the Ordinary dataset. However, there is a significant difference in the correct response rates in the CCommon dataset. RoBERTa-Large performs well in easy data, regardless of whether the context defies commonsense. This result indicates that RoBERTa-Large recognizes the connection between the prepositions and size relationships. The ALBERT-XXLarge model does not perform well for the CCommon dataset, even if the setting is easy; therefore, we consider that it merely answers according to commonsense rather than making inferences. In short, context is not useful for ALBERT when the prepositions do not provide direct hints.
| Model | Ordinary (Easy) | Ordinary (Hard) | CCommon (Easy) | CCommon (Hard) |
|------------|-----------------|-----------------|----------------|----------------|
| BERT-B | 0.506 | 0.471 | 0.460 | 0.557 |
| BERT-L | 0.527 | 0.479 | 0.480 | 0.553 |
| RoBERTa-B | 0.557 | 0.550 | 0.473 | 0.419 |
| RoBERTa-L | 0.711 | 0.671 | 0.467 | 0.369 |
| ALBERT-B | 0.527 | 0.479 | 0.480 | 0.553 |
| ALBERT-XXL | 0.744 | 0.693 | 0.353 | 0.346 |
| T0++ | 0.762 | 0.607 | 0.593 | 0.480 |
| T0 | 0.726 | 0.638 | 0.473 | 0.424 |
| GPT-3 | 0.940 | 0.788 | 0.567 | 0.296 |
| Human | 0.835 | 0.796 | 0.829 | 0.769 |
GPT-3 uses prepositions more effectively than other models and performs better on the Easy dataset, while the model struggles to answer the CCommon dataset in the hard setting. This result means GPT-3 learns commonsense well but cannot make physical logical inferences.
## 6 Conclusion
We develop a method providing a countercommonsense context to measure physical reasoning ability. Our proposed contextualized physical commonsense inference dataset reveals that current language models can partially predict size relations but do not perform as well as humans in contexts that contradict commonsense. These judgments are possible to a limited extent in the presence of certain prepositions such as "in" and "into." While we focused on size comparison tasks in this study, the importance of context in physical reasoning is not limited to this task. Increasing the size and scope of the datasets for contextual commonsense inference is necessary to build language models that more closely resemble humans and differentiate between general commonsense and the facts at hand.
## Limitations
The main limitation of our method is that it requires human effort to increase the variety of templates, which makes it difficult to create large datasets.
Using templates to generate data reduces the time required to create data manually, but the need for human labor remains an obstacle. To resolve this, the templates themselves need to be generated automatically, although the tags that constrain the nouns also need to be generated automatically, which is a difficult problem.
## Acknowledgment
We would like to thank anonymous reviewers for their valuable comments and suggestions. This work was supported by JST PRESTO Grant Number JPMJPR20C4 and JSPS KAKENHI Grant Number 21H03502.
## References
Emily Allaway, Jena D. Hwang, Chandra Bhagavatula, Kathleen McKeown, Doug Downey, and Yejin Choi. 2022. Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions.
ArXiv:2205.11658 [cs].
Stéphane Aroca-Ouellette, Cory Paik, Alessandro Roncone, and Katharina Kann. 2021. PROST: Physical Reasoning about Objects through Space and Time.
In *Findings of the Association for Computational* Linguistics: ACL-IJCNLP 2021, pages 4597–4608, Online. Association for Computational Linguistics.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: Reasoning about Physical Commonsense in Natural Language.
Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7432–7439. Number: 05.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How Large Are Lions? Inducing Distributions over Quantitative Attributes. In *Proceedings of the 57th Annual*
Meeting of the Association for Computational Linguistics, pages 3973–3983, Florence, Italy. Association for Computational Linguistics.
Maxwell Forbes and Yejin Choi. 2017. Verb Physics:
Relative Physical Knowledge of Actions and Objects.
In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 266–276, Vancouver, Canada.
Association for Computational Linguistics.
Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019.
Do Neural Language Representations Learn Physical Commonsense? *Proceedings of the 41st Annual* Conference of the Cognitive Science Society., page 7.
Pranav Goel, Shi Feng, and Jordan Boyd-Graber. 2019.
How Pre-trained Word Representations Capture Commonsense Physical Comparisons. In *Proceedings of* the First Workshop on Commonsense Inference in Natural Language Processing, pages 130–135, Hong Kong, China. Association for Computational Linguistics.
Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Automated knowledge base construction, AKBC '13, pages 25–30, New York, NY, USA. Association for Computing Machinery.
Filip Ilievski, Alessandro Oltramari, Kaixin Ma, Bin Zhang, Deborah L. McGuinness, and Pedro Szekely.
2021. Dimensions of commonsense knowledge.
Knowledge-Based Systems, 229:107347.
Robert Koons. 2022. Defeasible Reasoning. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, summer 2022 edition. Metaphysics Research Lab, Stanford University.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations.
Xiao Liu, Da Yin, Yansong Feng, and Dongyan Zhao.
2022. Things not written in text: Exploring spatial commonsense from visual signals. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2365–2376, Dublin, Ireland. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv:1907.11692 [cs].
Jon Ogborn. 2011. Science and commonsense. *Revista* Brasileira de Pesquisa em Educação em Ciências, 6.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word
Representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, and Yejin Choi. 2020. Thinking Like a Skeptic: Defeasible Inference in Natural Language.
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4661–4675, Online.
Association for Computational Linguistics.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *International Conference on Learning* Representations.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics-on what language model pre-training captures. *Transactions of the Association for Computational Linguistics*, 8:743–758.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yiben Yang, Larry Birnbaum, Ji-Ping Wang, and Doug Downey. 2018. Extracting Commonsense Properties from Embeddings with Limited Human Guidance. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 2:
Short Papers), pages 644–649, Melbourne, Australia.
Association for Computational Linguistics.
| Model | Model-FullName |
|------------|--------------------------------|
| BERT-B | bert-base-uncased |
| BERT-L | bert-large-uncased |
| RoBERTa-B | roberta-base |
| RoBERTa-L | roberta-large |
| ALBERT-B | albert-base-v2 |
| ALBERT-XXL | albert-xxlarge-v2 |
| T0++ | bigscience/T0pp |
| T0 | bigscience/T0 |

Table 4: Paths for using the Hugging Face models used in this study. These models were used without modification.
Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2022. PACS: A
Dataset for Physical Audiovisual CommonSense Reasoning. In *Computer Vision - ECCV 2022*, Lecture Notes in Computer Science, pages 292–309, Cham.
Springer Nature Switzerland.
Rowan Zellers, Ari Holtzman, Matthew Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, and Yejin Choi. 2021. PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2040–2050, Online. Association for Computational Linguistics.
Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. 2020. Do Language Embeddings capture Scales? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 292–299, Online. Association for Computational Linguistics.
## A Experiment Details
We used language models published on Hugging Face Transformers (Wolf et al., 2020), except GPT-3, under the MIT (RoBERTa) or Apache-2.0 (BERT, ALBERT, T0, T0++) licenses. For GPT-3, the OpenAI API (text-davinci-003)3 is used. All of these models are designed to solve downstream natural language tasks. Table 4 lists the paths for accessing the models via Hugging Face.
We use a GPU Tesla V100-PCIE-32GB. The total computation time was 1 hour for the masked language models and 2 hours for the generative models.
## B Human Evaluation Details
3https://platform.openai.com/docs/models/gpt-3-5

We evaluate human accuracy in a size comparison task using Amazon Mechanical Turk. We provide the following instructions and let the crowdworkers choose their answers. We calculate the reward as $15 per hour. Figure 2 shows the instructions for the contextualized size comparison task. The choices are virtually two-option questions, except
"I can't imagine the situation," etc. Figure 3 shows the instructions for the non-contextualized size comparison task. The choices are "obj1","obj2,"
and "N/A (cannot determine)."
No personal information is obtained. Crowdworkers live in the United Kingdom, the United States, and Canada. By accepting Amazon Mechanical Turk's participation agreement 4, crowdworkers consent to the collection and use of nonpersonal data for research purposes.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
The paper is about the simple task of comparing the sizes of two objects, and we believe there is no such risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1 and Abstract
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3,4, Appendix A
✓ B1. Did you cite the creators of artifacts you used?
Section 1,4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3, Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3
## C ✓ **Did You Run Computational Experiments?** 4, Appendix A, B
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4, Appendix A
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
4, Appendix C
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix C
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Appendix C
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix C
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
The task is a simple task of comparing the sizes of two objects and obviously does not pose any problems with user safety, health, or personal information.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Appendix C |
guriel-etal-2023-morphological | Morphological Inflection with Phonological Features | https://aclanthology.org/2023.acl-short.54 | Recent years have brought great advances into solving morphological tasks, mostly due to powerful neural models applied to various tasks as (re)inflection and analysis. Yet, such morphological tasks cannot be considered solved, especially when little training data is available or when generalizing to previously unseen lemmas. This work explores effects on performance obtained through various ways in which morphological models get access to sub-character phonological features that are often the targets of morphological processes. We design two methods to achieve this goal: one that leaves models as is but manipulates the data to include features instead of characters, and another that manipulates models to take phonological features into account when building representations for phonemes. We elicit phonemic data from standard graphemic data using language-specific grammars for languages with shallow grapheme-to-phoneme mapping, and we experiment with two reinflection models over eight languages. Our results show that our methods yield comparable results to the grapheme-based baseline overall, with minor improvements in some of the languages. All in all, we conclude that patterns in character distributions are likely to allow models to infer the underlying phonological characteristics, even when phonemes are not explicitly represented. | # Morphological Inflection With Phonological Features
David Guriel, Omer Goldman, Reut Tsarfaty Bar-Ilan University
{davidgu1312,omer.goldman}@gmail.com,[email protected]
## Abstract
Recent years have brought great advances into solving morphological tasks, mostly due to powerful neural models applied to various tasks as (re)inflection and analysis. Yet, such morphological tasks cannot be considered solved, especially when little training data is available or when generalizing to previously unseen lemmas. This work explores effects on performance obtained through various ways in which morphological models get access to subcharacter phonological features that are often the targets of morphological processes. We design two methods to achieve this goal: one that leaves models as is but manipulates the data to include features instead of characters, and another that manipulates models to take phonological features into account when building representations for phonemes. We elicit phonemic data from standard graphemic data using language-specific grammars for languages with shallow grapheme-to-phoneme mapping, and we experiment with two reinflection models over eight languages. Our results show that our methods yield comparable results to the grapheme-based baseline overall, with minor improvements in some of the languages. All in all, we conclude that patterns in character distributions are likely to allow models to infer the underlying phonological characteristics, even when phonemes are not explicitly represented.
## 1 Introduction
In recent years, morphological tasks received much attention in NLP through various tasks such as
(re)inflection, lemmatization and others, specifically through the SIGMORPHON shared tasks
(Cotterell et al., 2016, 2017, 2018; McCarthy et al.,
2019; Vylomova et al., 2020; Pimentel et al., 2021). State-of-the-art models seem to achieve quite high results in such cross-lingual evaluation campaigns, although recent works showed that there is still room for improvements (Goldman et al., 2022).
Most studies aiming at morphological tasks design models that operate at the character level, without reference to the phonological components that compose the phonemes represented by the characters.1 This is despite the fact that many morphological processes have distinct phonological features, rather than phonemes, as either the trigger or target of morphological processes. For example, in vowel harmony, a single feature of a vowel in the stem determines the vowels that appear in the affixes added to that stem. Without direct evidence of the phonological features composing every phoneme, models must resort to memorizing groups of phonemes that pattern together for an unobserved reason.
In this work we hypothesize that explicitly inputting models with phonological features will lead to better modelling of morphological tasks. We set out to equip models with two alternative methods for incorporating that information. One method replaces the character-level tokens with phonological feature tokens; and another one equips the model with a self-attention mechanism that learns representation of phonemes from their features.
We implement these methods on the task of morphological reinflection, where forms of the same lemma are inflected from one another. We experiment with 8 languages and 2 models: an LSTM encoder-decoder with global attention by Silfverberg and Hulden (2018); and a transducer by Makarov and Clematide (2018) that predicts edit actions between source and target word-forms and is suitable for lower amounts of training data.
Our experiments show that the proposed methods yield results comparable to the grapheme-based baseline setting for the transducer model. On average across languages, the best phonologically-aware method suffered from a drop of 2.8 accuracy points, although the performance on some individual languages marginally improved. We thus conjecture that the phonological characteristics are already encoded in the graphemic representations elicited by this model. The results of this work are in line with other works, performed in different settings, investigating the role of phonology in morphological models (see Section 6).

1 Some exceptions do exist, like Malouf (2017)'s model that operates over phonemes rather than characters.
We further note that the LSTM model, unlike the transducer, did not perform well on graphemic data and suffered from a severe drop when applied on phonological data in all tested languages.
We attribute this to the transducer's attested ability to perform well particularly in low-resource setting. We subsequently conjecture that, for the phonologically-aware variant of the reinflection task, standard amounts of reinflection data should be effectively considered low-resourced.
## 2 Morpho-Phonological Processes
Utterances in natural language - sentences and words - are composed of phonemes. Yet, one can further decompose phonemes to their very atomic elements: *phonological distinctive features*.
A phonological feature is the minimal unit within a phoneme that distinguishes it from other phonemes.
Every phoneme can be described as a unique combination of such features. Vowels, for example, are said to take the features: *backness* of the tongue, height of the lower jaw, and *roundness* of the lips; the sound /a/ then has the values front, *open* and unrounded. Consonants usually take the features: place of articulation, *manner of articulation* and voicing, e.g. /g/ has the values velar, *plosive* and voiced.2
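To make the decomposition concrete, the following short Python sketch (ours, not the authors') stores a toy phoneme inventory as a lookup table from phonemes to feature tuples; the feature values are simplified illustrations of the IPA-based inventory used later in the paper.

```python
# Toy phoneme-to-feature table; the feature set is a simplified illustration,
# not the exact IPA feature inventory used in the experiments.
PHONEME_FEATURES = {
    # vowels: (backness, height, roundness)
    "a": ("front", "open", "unrounded"),
    "o": ("back", "close-mid", "rounded"),
    "œ": ("front", "open-mid", "rounded"),
    # consonants: (place, manner, voicing)
    "g": ("velar", "plosive", "voiced"),
    "l": ("alveolar", "lateral", "voiced"),
}

def decompose(phonemes):
    """Flatten a phoneme sequence into its sequence of distinctive features."""
    return [feature for p in phonemes for feature in PHONEME_FEATURES[p]]

print(decompose(["o", "l"]))
# ['back', 'close-mid', 'rounded', 'alveolar', 'lateral', 'voiced']
```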
Many languages exhibit morphological processes whose target or trigger are phonological features. For instance, Turkish has vowel harmony at the backness feature: the stem's last vowel controls (*harmonizes*) the backness of other vowels in morphemes added to that stem. Table 1 illustrates the alternation for future tense inflection. For ol, the future morpheme includes the back vowel /a/, according to the backness of the vowel /o/. In öl, however, the vowel /œ/ is front, so the morpheme includes the front vowel /e/.
When a character-level inflection model learns this process, it has to memorize the relation between the letters representing vowels of the same backness (including 4 back vowels and 4 front vowels) instead of aligning vowels explicitly by their backness feature. In general, describing such processes at the grapheme level is often intricate and requires models trained on morphological tasks to put unnecessary effort into learning patterns that are more complicated than their original cause. Because the underlying phonological information is not explicitly shown to them, instead of learning simple rules of phonological features, they memorize groups of characters that pattern together for no observable reason.

|       | Stem    | Future Tense      |
|-------|---------|-------------------|
| 'be'  | ol /ol/ | olacak /oladʒak/  |
| 'die' | öl /œl/ | ölecek /œlɛdʒɛk/  |

Table 1: Turkish future tense inflection, illustrating backness vowel harmony.

2 The features of vowels and consonants are not unrelated. For example, *place of articulation* and *backness* are essentially aliases for the same physical feature.
A model that is aware of phonological features would be able to easily learn these relations and treat morpho-phonological processes straightforwardly. In order to construct such a model there is a need for phonologically annotated data or for a tool that converts words to their corresponding sequences of phonemes (their verbal pronunciation) and decomposes the phonemes into their phonological distinctive features. A simple option would be to employ a component that performs grapheme-to-phoneme (G2P) and phoneme-to-grapheme (P2G)
conversions for every language, as well as decomposes the phonemes to their corresponding distinctive features. Thus, every character-level model would be able to process phonological data. In the next section we present two ways to incorporate such signals into the data and models for morphological tasks.
## 3 Modeling Reinflection With Phonology
We set out to re-model morphological tasks by integrating phonological information, in order to make phonological processes explicitly learnable for models. We propose two generic methods that are applicable to any morphological model.
Formally, we denote 3 alphabets, for graphemes Σg, phonemes Σp and phonological features Σf .
The first one is language-dependent while the others are universally defined in the IPA list of symbols and features (Association, 1999).3 We treat a word as a tuple of its composing graphemes g ∈ Σg+. Correspondingly, the sequence of phonemes that is the result of applying the G2P component to g is denoted by p ∈ Σp+, and the phonemes' decomposition to a sequence of features is denoted by f ∈ Σf+.

3 The IPA features we use here may be better described as coarse phonetic features rather than purely phonological, since in some rare language-specific cases there is a mismatch between the phonological behavior of a phoneme and its phonetic properties. However, the scarcity of these cases led to the general usage of IPA features as phonological descriptors and made most linguists consider phonetics and phonology as a unified grammatical mechanism (e.g., Ohala, 1990; Pierrehumbert, 1990).
Suppose we have a morphological task T, in which the input is gsrc and the output ground truth is gtrg. That is
$$\mathbf{g}_{t r g}=T\left(\mathbf{g}_{s r c};S\right)$$
where S is a set of bundles of morphological features that complement the input form. In standard inflection tasks, for example, gsrc is the lemma and gtrg is the inflected output form, where S
is the feature bundle to inflect the lemma to. In reinflection, the forms gsrc and gtrg are the input and output forms, and S is the feature bundles of the source and target word forms, e.g.
{(FUT,2,SG),(FUT,3,PL)}.
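For concreteness, a single reinflection instance can be represented roughly as the following record; the field names and the Turkish forms are our own illustrative choices, not taken from the datasets.

```python
from dataclasses import dataclass

@dataclass
class ReinflectionExample:
    """One reinflection example: map (g_src, S) to g_trg."""
    g_src: tuple          # source word form as a tuple of graphemes
    s_src: frozenset      # morphological feature bundle of the source form
    s_trg: frozenset      # morphological feature bundle of the target form
    g_trg: tuple          # target word form to be predicted

example = ReinflectionExample(
    g_src=tuple("olacaksın"),                 # Turkish 'you will be'
    s_src=frozenset({"FUT", "2", "SG"}),
    s_trg=frozenset({"FUT", "3", "PL"}),
    g_trg=tuple("olacaklar"),                 # 'they will be'
)
print(example.g_src[:3], example.s_trg)
```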
We denote MT as a model that solves T, i.e., it takes gsrc and S, and generates ĝtrg, a prediction of the target word:
$${\hat{\mathbf{g}}}_{t r g}=M_{T}\left(\mathbf{g}_{s r c};S\right)$$
In order to incorporate the phonological information into MT , its inputs should obviously be changed to include this information - either phonemes or phonological features. However, changes can also be made to MT itself to better handle the new inputs. We thus propose two methods for introducing phonological information into morphological models: one manipulates only the source and target data to include phonological features, and one adds a learnable layer to the model in order to facilitate better processing of the new input. Both methods leave S untouched; the model processes S in exactly the same way as in the graphemic setting.
Data Manipulation In the first method, we propose to train MT on the *phonological features* of the source and target words, fsrc and ftrg, instead of their letters. We do not modify MT or the way it processes S; the model simply operates directly on the modified representations:
$${\hat{\mathbf{f}}}_{t r g}=M_{T}\left(\mathbf{f}_{s r c};S\right)$$
The network is then optimized with a given loss function ℓ by comparing between the predicted features and the gold target word converted to features:
$${\mathcal{L}}=\mathbb{E}\left[\ell\left({\hat{\mathbf{f}}}_{t r g},\mathbf{f}_{t r g}\right)\right]$$
A clear disadvantage of this method is that the resulting sequences are much longer than the original ones, in practice approximately 3-4 times longer.
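A minimal sketch of this data manipulation, assuming a hypothetical shallow G2P table and a toy feature inventory; the phoneme-boundary marker is our own addition and not necessarily part of the authors' format.

```python
# Hypothetical shallow G2P table and feature table for a few Turkish letters;
# real experiments use language-specific grammars (Section 4, Preprocessing).
G2P = {"o": "o", "l": "l", "a": "a", "c": "dʒ", "k": "k"}
FEATURES = {
    "o": ["back", "close-mid", "rounded"],
    "a": ["back", "open", "unrounded"],   # Turkish <a> patterns as a back vowel
    "l": ["alveolar", "lateral", "voiced"],
    "dʒ": ["postalveolar", "affricate", "voiced"],
    "k": ["velar", "plosive", "voiceless"],
}

def to_feature_tokens(word):
    """Replace each character with the distinctive features of its phoneme."""
    tokens = []
    for char in word:
        tokens.extend(FEATURES[G2P[char]])
        tokens.append("|")  # phoneme-boundary marker (our assumption)
    return tokens

tokens = to_feature_tokens("olacak")
print(len("olacak"), len(tokens))  # the feature sequence is ~3-4 times longer
```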
Model Manipulation In the second method, we also manipulate the model in accordance with the new data format. We let the model learn a phonemic representation in a way that is aware of the phoneme's phonological features. To this end, we add a self-attention layer (Vaswani et al., 2017) between the embedding matrices and the rest of the network. This layer takes the embeddings of a phoneme E[psrc] and its features E[fsrc], and learns a single vector per phoneme, p̃src. The network is then trained to predict the phonemes of the target word:
$$\begin{array}{l}{{\hat{\mathbf{p}}_{t r g}=M_{T}\left(\tilde{\mathbf{p}}_{s r c};S\right)}}\\ {{\tilde{\mathbf{p}}_{s r c}=\mathrm{SelfAttention}\left(q,K,V\right)}}\\ {{K,V=E\left[\mathbf{f}_{s r c}\right]}}\\ {{\quad q=E\left[\mathbf{p}_{s r c}\right],}}\end{array}$$
where the self-attention is computed as follows
(where d is the output dimension and n is the number of heads):
$${\widetilde{\mathbf{p}}}_{s r c}=\operatorname{softmax}\left({\frac{q K^{T}}{\sqrt{d/n}}}\right)\odot V$$
The model is optimized similarly to the first method, except the compared sequences are the predicted phonemes and the gold target word converted to phonemes:
$${\mathcal{L}}=\mathbb{E}\left[\ell\left({\hat{\mathbf{p}}}_{t r g},\mathbf{p}_{t r g}\right)\right]$$
The advantage of this method over the previous one is that the input to the inner network is of the order of magnitude of the number of phonemes, and not the number of features. This leads to more reasonable lengths of the inputs, but it relies more heavily on the model to learn to combine feature representations correctly.
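A rough PyTorch sketch of this fusion layer follows; it uses the library's standard multi-head attention in place of the exact attention formulation above, and the dimensions, number of heads and batching scheme are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PhonemeFeatureFusion(nn.Module):
    """Fuse each phoneme embedding with the embeddings of its distinctive features."""

    def __init__(self, num_phonemes, num_features, dim=128, heads=4):
        super().__init__()
        self.phoneme_emb = nn.Embedding(num_phonemes, dim)
        self.feature_emb = nn.Embedding(num_features, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, phoneme_ids, feature_ids):
        # phoneme_ids: (batch, seq_len); feature_ids: (batch, seq_len, n_feats)
        b, t, f = feature_ids.shape
        q = self.phoneme_emb(phoneme_ids).reshape(b * t, 1, -1)   # one query per phoneme
        kv = self.feature_emb(feature_ids).reshape(b * t, f, -1)  # its features as keys/values
        fused, _ = self.attn(q, kv, kv)                           # (b*t, 1, dim)
        return fused.reshape(b, t, -1)                            # fused phoneme representations

# toy usage: 2 sequences of 5 phonemes, each decomposed into 3 features
layer = PhonemeFeatureFusion(num_phonemes=50, num_features=30)
p = torch.randint(0, 50, (2, 5))
f = torch.randint(0, 30, (2, 5, 3))
print(layer(p, f).shape)  # torch.Size([2, 5, 128])
```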
## 4 Experiments
Models We applied the described methods to two character-level models.4 Both were modified to solve reinflection instead of inflection and to handle phonemic symbols and phonological features:
- *LSTM*: a standard LSTM Encoder-Decoder model with global attention,5 as proposed in Silfverberg and Hulden (2018).
- *Transduce*: An LSTM-based model by Makarov and Clematide (2018) predicting edit actions between the source and the target. This model is more robust in low-resource settings.
Data We experimented with eight languages:
Swahili, Georgian, Albanian, Bulgarian, Latvian, Hungarian, Finnish and Turkish, in three part-of-speech types. All of these languages have shallow orthography, i.e., nearly one-to-one G2P and P2G mappings. We purposefully selected such languages to be able to disentangle the effects of convoluted orthographies from the potential benefits of phonetic decomposition to features, and to avoid the use of trainable G2P and P2G models that would inevitably propagate errors and serve as a confounding factor. We compared the two proposed methods to the baseline where the models take letters as the source and target tokens.
We randomly sampled 10,000 reinflection samples from the UniMorph 3.0 repository (McCarthy et al., 2020) for train, validation and test sets, with 80%-10%-10% split ratios. The split was done such that the sets would have no overlapping lemmas, following Goldman et al. (2022). The models were trained separately for each language and POS.
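A lemma-disjoint split in the spirit of Goldman et al. (2022) can be sketched as follows; the data layout (a list of dicts with a 'lemma' key) and the handling of the ratios are simplifying assumptions.

```python
import random

def lemma_disjoint_split(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split reinflection samples so that no lemma appears in more than one set."""
    lemmas = sorted({s["lemma"] for s in samples})
    random.Random(seed).shuffle(lemmas)
    n = len(lemmas)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)
    bucket = {l: "train" for l in lemmas[:cut1]}
    bucket.update({l: "dev" for l in lemmas[cut1:cut2]})
    bucket.update({l: "test" for l in lemmas[cut2:]})
    split = {"train": [], "dev": [], "test": []}
    for s in samples:
        split[bucket[s["lemma"]]].append(s)   # every sample of a lemma lands in one set
    return split

toy = [{"lemma": f"lemma{i % 20}", "src": "...", "trg": "..."} for i in range(100)]
print({k: len(v) for k, v in lemma_disjoint_split(toy).items()})
```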
Preprocessing Due to the orthographic shallowness of the selected languages we were able to implement for each language a rule-based component for G2P and P2G conversions.
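For a shallow orthography, such a rule-based converter can be as simple as a longest-match substitution table; the rules below are a hypothetical illustration (loosely Hungarian-like), not the grammars actually used in the experiments.

```python
# Hypothetical grapheme-to-phoneme rules for a shallow orthography;
# multi-character graphemes are matched before single characters.
G2P_RULES = {"sz": "s", "zs": "ʒ", "s": "ʃ", "a": "ɒ", "e": "ɛ", "k": "k", "r": "r"}

def g2p(word):
    phonemes, i = [], 0
    keys = sorted(G2P_RULES, key=len, reverse=True)  # longest match first
    while i < len(word):
        for g in keys:
            if word.startswith(g, i):
                phonemes.append(G2P_RULES[g])
                i += len(g)
                break
        else:
            raise ValueError(f"no rule for {word[i]!r}")
    return phonemes

print(g2p("szakasz"))  # ['s', 'ɒ', 'k', 'ɒ', 's']
```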
![3_image_0.png](3_image_0.png)
Table 2: Graphemic Accuracy of all systems, averaged on all language-POS datasets, and averaged over 3 seeds.
Highest value per row is **bold**.
## 5 Results And Analysis
Table 2 shows the results of the two systems across the two methods, compared to the graphemic baseline, averaged over languages. The *LSTM* model performs poorly, with 46 accuracy points at the baseline, and less than 30 points in the novel methods. The *Transduce* model performed much better in general, with more than 80 points in all 3 settings. On average over the 15 language-POS combinations, training with our methods resulted in a slight drop of 2.8 points, which makes them comparable with the baseline. These results may imply that our methods fit better to stronger models, and that this setting and these data quantities may be considered low-resource, at least without hallucination methods like that of Anastasopoulos and Neubig (2019).
Table 3 breaks down the results of the *Transduce* model per language. In 7 out of 15 datasets, at least one of our methods outperformed the baseline. The difference varies from 0.9 up to 11.7 accuracy points. All in all, it seems that there is no connection between the relative success of the phonologically-aware methods and the abundance of morpho-phonological processes in a language.
In Turkish, for instance, which has vowel harmony and additional phonological processes, the baseline performed much better, while in Swahili and Georgian (which barely exhibit such processes) there were clear improvements.
To provide insights into the sufficiency of the data and the richness of the signal, we plot in Figure 1 (in Appendix A) learning curves for the *Transduce* model per language. We trained each model over an increasing number of train samples, from 1,000 to 8,000, and evaluated them on the same test sets for each language. The general trends show that the amount of data is indeed sufficient for the model and that additional data does not provide a much richer signal, as in most cases the test accuracy with 8,000 samples is similar to the one with 3,000 samples. Moreover, the graphs show that our methods have no clear advantage over the baseline even with as few as 1,000 training examples.
| Language  | POS | Baseline    | Data Manipulation | Model Manipulation |
|-----------|-----|-------------|-------------------|--------------------|
| Bulgarian | Adj | 96.6%±0.4%  | 95.5%±1.2%        | 95.7%±2.4%         |
| Bulgarian | V   | 89.0%±1.1%  | 87.6%±1.0%        | 88.0%±1.5%         |
| Finnish   | Adj | 94.2%±0.5%  | 92.8%±0.2%        | 92.8%±0.1%         |
| Finnish   | N   | 82.3%±0.8%  | 83.1%±0.9%        | 78.2%±0.9%         |
| Finnish   | V   | 88.1%±2.1%  | 79.8%±2.8%        | 84.3%±1.0%         |
| Hungarian | V   | 90.9%±1.1%  | 89.6%±0.5%        | 89.7%±0.8%         |
| Georgian  | N   | 90.2%±0.5%  | 91.4%±0.8%        | 90.3%±0.6%         |
| Georgian  | V   | 42.2%±2.0%  | 28.4%±1.5%        | 44.2%±4.1%         |
| Latvian   | N   | 88.4%±0.8%  | 90.0%±0.6%        | 85.6%±0.5%         |
| Latvian   | V   | 76.5%±0.9%  | 70.9%±0.9%        | 67.9%±1.9%         |
| Albanian  | V   | 84.3%±1.0%  | 79.6%±1.4%        | 86.9%±2.2%         |
| Swahili   | Adj | 66.7%±2.9%  | 74.4%±4.5%        | 64.4%±12.6%        |
| Swahili   | V   | 90.9%±1.0%  | 87.0%±2.1%        | 92.4%±1.2%         |
| Turkish   | Adj | 91.6%±2.1%  | 79.0%±4.3%        | 76.8%±2.3%         |
| Turkish   | V   | 82.5%±0.5%  | 75.8%±2.1%        | 74.9%±0.9%         |
| Average   |     | 83.6%±0.2%  | 80.3%±0.2%        | 80.8%±0.9%         |

Table 3: Accuracy of the *Transduce* model per language and POS, for the baseline and the two phonologically-aware methods.
## 6 Discussion And Conclusion
In this work we incorporated phonological information into morphological tasks. We proposed two methods: one that modifies the data, and one that also manipulates the model. We exemplified them on reinflection for two models and found out that, on average, our methods are comparable with the baseline and do not surpass it. We conclude that the embeddings obtained for the graphemic representations in such tasks may already encode the underlying phonological information in the data.
This conclusion is in line with the work of Wiemerslage et al. (2018), who similarly aimed, with no success, to use phonological data in morphological inflection. Unlike our work, they used a weaker inflection model as a baseline for modification and they had a different method in constructing the phonologically-aware embeddings. More crucially, they experimented with a *form-split* setting, which means that there was significant overlap between the sampled lemmas in the train-test split. Our results also corroborate the findings of Silfverberg et al. (2018), who examined phoneme embeddings from various sources, including from a morphological inflection model, and showed that they implicitly encode phonological features, thus supporting our main conclusion.
## Limitations
One limitation of our work is the experimentation only with languages with shallow orthographies, i.e. relatively simple G2P and P2G mappings. The results might vary for deeper-orthographies languages.
Although we took extra care to verify our conversions are correct and complete, and designed the rules to be as comprehensive as possible, automatic rule-based processes in languages may not be 100%
perfect and some corner cases may introduce errors.
These errors may propagate to affect the numerical results. To mitigate this issue, when ambiguities in determining a target phoneme (or grapheme) in a given language occur, we purposefully select the values that occur more frequently in the UniMorph data of that particular language.
## Acknowledgements
This research is funded by a grant from the European Research Council, ERC-StG grant number 677352, and a grant by the Israeli Ministry of Science and Technology (MOST), grant number 317992, for which we are grateful.
## References
Antonios Anastasopoulos and Graham Neubig. 2019.
Pushing the limits of low-resource morphological inflection. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 984–996, Hong Kong, China. Association for Computational Linguistics.
International Phonetic Association. 1999. *Handbook* of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet.
Cambridge University Press.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D.
McCarthy, Katharina Kann, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL–
SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL–
SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27, Brussels. Association for Computational Linguistics.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLLSIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In *Proceedings of the CoNLL SIGMORPHON 2017 Shared Task:*
Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics.
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden.
2016. The SIGMORPHON 2016 shared Task— Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics.
Omer Goldman, David Guriel, and Reut Tsarfaty. 2022.
(un)solving morphological inflection: Lemma overlap artificially inflates models' performance. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short* Papers), pages 864–870, Dublin, Ireland. Association for Computational Linguistics.
Peter Makarov and Simon Clematide. 2018. Neural transition-based string transduction for limitedresource setting in morphology. In Proceedings of the 27th International Conference on Computational Linguistics, pages 83–93, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Robert Malouf. 2017. Abstractive morphological learning with a recurrent neural network. *Morphology*,
27(4):431–458.
Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernštreits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020.
UniMorph 3.0: Universal Morphology. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 3922–3931, Marseille, France.
European Language Resources Association.
Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J.
Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and crosslingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229–244, Florence, Italy. Association for Computational Linguistics.
John J Ohala. 1990. There is no interface between phonology and phonetics: a personal view. *Journal* of phonetics, 18(2):153–171.
Janet Pierrehumbert. 1990. Phonological and phonetic representation. *Journal of phonetics*, 18(3):375–394.
Tiago Pimentel, Maria Ryskina, Sabrina J. Mielke, Shijie Wu, Eleanor Chodroff, Brian Leonard, Garrett Nicolai, Yustinus Ghanggo Ate, Salam Khalifa, Nizar Habash, Charbel El-Khaissi, Omer Goldman, Michael Gasser, William Lane, Matt Coler, Arturo Oncevay, Jaime Rafael Montoya Samame, Gema Celeste Silva Villegas, Adam Ek, Jean-Philippe
Bernardy, Andrey Shcherbakov, Aziyana Bayyr-ool, Karina Sheifer, Sofya Ganieva, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Andrew Krizhanovsky, Natalia Krizhanovsky, Clara Vania, Sardana Ivanova, Aelita Salchak, Christopher Straughn, Zoey Liu, Jonathan North Washington, Duygu Ataman, Witold Kieras, Marcin Woli ´ nski, Totok Suhardijanto, Niklas ´
Stoehr, Zahroh Nuriah, Shyam Ratan, Francis M.
Tyers, Edoardo M. Ponti, Grant Aiton, Richard J.
Hatcher, Emily Prud'hommeaux, Ritesh Kumar, Mans Hulden, Botond Barta, Dorina Lakatos, Gábor Szolnok, Judit Ács, Mohit Raj, David Yarowsky, Ryan Cotterell, Ben Ambridge, and Ekaterina Vylomova. 2021. SIGMORPHON 2021 shared task on morphological reinflection: Generalization across languages. In Proceedings of the 18th SIGMORPHON
Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229–259, Online. Association for Computational Linguistics.
Miikka Silfverberg and Mans Hulden. 2018. An encoder-decoder approach to the paradigm cell filling problem. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 2883–2889, Brussels, Belgium. Association for Computational Linguistics.
Miikka P. Silfverberg, Lingshuang Mao, and Mans Hulden. 2018. Sound analogies with phoneme embeddings. In *Proceedings of the Society for Computation in Linguistics (SCiL) 2018*, pages 136–144.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Maria Ponti, Rowan Hall Maudslay, Ran Zmigrod, Josef Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrew Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Adina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection. In *Proceedings of the 17th SIGMORPHON*
Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1–39, Online.
Association for Computational Linguistics.
Adam Wiemerslage, Miikka Silfverberg, and Mans Hulden. 2018. Phonological features for morphological inflection. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 161–166, Brussels, Belgium. Association for Computational Linguistics.
Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Applying the transformer to character-level transduction. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pages 1901–1907, Online. Association for Computational Linguistics.
![7_image_0.png](7_image_0.png)
## ACL 2023 Responsible NLP Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Described in the Limitations section A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 4
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wu-etal-2023-holistic | A Holistic Approach to Reference-Free Evaluation of Machine Translation | https://aclanthology.org/2023.acl-short.55 | Traditional machine translation evaluation relies on reference written by humans. While reference-free evaluation gets rid of the constraints of labor-intensive annotations, which can pivot easily to new domains and is more scalable. In this paper, we propose a reference-free evaluation approach that characterizes evaluation as two aspects: (1) fluency: how well the translated text conforms to normal human language usage; (2) faithfulness: how well the translated text reflects the source data. We further split the faithfulness into word-level and sentence-level. Extensive experiments spanning WMT18/19/21 Metrics segment-level daRR and MQM datasets demonstrate that our proposed reference-free approach, ReFreeEval, outperforms SOTA reference-fee metrics like YiSi-2. | # A Holistic Approach To Reference-Free Evaluation Of Machine Translation
Hanming Wu1˚**, Wenjuan Han**1˚
, Hui Di2, Yufeng Chen1**, Jinan Xu**1:
1 Beijing Jiaotong University, Beijing, China 2 Toshiba (China) Co., Ltd., Beijing, China [email protected], [email protected] [email protected], [email protected], [email protected]
## Abstract
Traditional machine translation evaluation relies on references written by humans. While reference-free evaluation gets rid of the constraints of labor-intensive annotations, it can pivot easily to new domains and is more scalable. In this paper, we propose a referencefree evaluation approach that characterizes evaluation as two aspects: (1) fluency: how well the candidate translation conforms to normal human language usage; (2) faithfulness:
how well the candidate translation reflects the source data. We further split the faithfulness into word-level and sentence-level. Extensive experiments spanning WMT18/19/21 Metrics segment-level daRR and MQM datasets demonstrate that our proposed reference-free approach, ReFreeEval, outperforms SOTA
reference-free metrics like YiSi-2, SentSim and BERTScore-MKD in most language directions.
The code can be found at ReFreeEval Repo1.
## 1 Introduction
Machine translation evaluation has conventionally relied on reference, where outputs are compared against translations written by humans. This is in contrast to the reference-free manner in which translation quality is directly assessed with the source text. Reference-free evaluation (Napoles et al., 2016; Thompson and Post, 2020; Agrawal et al., 2021) has the potential to free the evaluation model from the constraints of labor-intensive annotations, allowing it to pivot easily to new domains.
In this way, reference-free evaluation metrics are substantially more scalable and have lately been in the spotlight.
The history of reference-free evaluation for MT
can trace back to "QE as a Metric" track of
˚ Equal contribution. : Corresponding author.
1https://github.com/cocacola-lab/
Reference-Free-Evaluation-of-Machine-Translation.
git WMT2019 Metrics Task (Ma et al., 2019). YiSi2 (Lo, 2019) and XBERTScore (Zhang* et al.,
2020; Leiter, 2021) are embedding-based methods that adopt contextual word embeddings to calculate the lexical similarity between the source and candidate translation words. Quality estimation (Fonseca et al., 2019) system metrics such as UNI+ (Yankovskaya et al., 2019) and COMETQE (Rei et al., 2020a, 2021) also leverage contextual word embeddings and feed them into a feedforward network. However, they are trained to regress on human scores that are expensive to collect, and gross discrepancies exist when different humans are asked to label the scores.
More challenging but worthwhile, we focus on dispensing with references as well as human scores.
Nevertheless, embedding-based methods are limited to token-level semantic similarity while neglecting sentence-level faithfulness (Song et al.,
2021). Besides, it's difficult for word embeddings to discriminate matched word pairs from random ones (Zhao et al., 2020a).
In addition, current reference-free evaluation methods rarely take fluency into account. For the unfluent candidates whose content is roughly consistent with the source, the embedding-based metrics can hardly discriminate and provide accurate evaluation scores2. Moreover, the general goal of evaluation metrics is to estimate not only the semantic equivalence between source and candidate but also the general quality (i.e., fluency and naturalness) (Banchs et al., 2015; Feng et al., 2020; Yuan et al., 2021).
In this work, we propose a holistic approach (i.e.,
ReFreeEval) to enhance the evaluation model in aspects of fluency and faithfulness, meanwhile on both word and sentence levels. With regard to fluency, we pose a data augmentation method and train a fluency discrimination module. For word-level faithfulness, we adopt a self-guided 2We provide more details and case studies in Appendix B.
623 contrastive word-alignment method. For sentencelevel faithfulness, we execute knowledge distillation with SBERT (Reimers and Gurevych, 2019) to capture more fine-grained semantics. Our method builds on the framework of XBERTScore. Extensive experiments spanning WMT18/19/21 Metrics (Ma et al., 2018, 2019; Freitag et al., 2021)
segment-level daRR and MQM datasets demonstrate that our proposed reference-free approach, ReFreeEval, outperforms SOTA reference-free metrics like YiSi-2, SentSim and BERTScoreMKD in most language directions.
## 2 Approach
Reference-free evaluation of MT can be characterized as two aspects: (1) fluency: how well it conforms to normal human language usage; and (2)
faithfulness: how well the translated text reflects the source data. We assess faithfulness at different granularity: word level and sentence level. Figure 1 is the illustration of our ReFreeEval method.
## 2.1 Sentence-Level Fluency
We explore a data augmentation method to perturb the fluency of target sentences with noise that is difficult to identify. Then we train a fluency discrimination module with contrastive learning (Gao et al., 2021; Zhang et al., 2021; Wu et al.,
2022; Wang et al., 2022) to distinguish fluent samples from perturbed samples (namely, challenging negative samples).
## Data Augmentation Using Clause Permutation
A complex or compound sentence3 has two or more clauses and relative clauses that are joined together with conjunctions or punctuation. As logical relations exist between these clauses, we manipulate and permute the clauses separated by punctuation, instead of words. In this way, the meaning is preserved inside the clauses while the sentence is often unfluent and unnatural. Similarly, for a simple sentence with only one clause,4 we randomly split it into two fragments and permute them. Compared to permutation on the token level, clause-level permutation has less influence on sentence fluency and semantic change. The clause-based permutation method produces perturbed samples that are more challenging and harder to recognize.
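A minimal sketch of the clause-permutation perturbation; the punctuation set used to detect clause boundaries and the fallback split point for single-clause sentences are our assumptions.

```python
import random
import re

PUNCT = r"[,;:]"  # clause-separating punctuation (assumed set)

def permute_clauses(sentence, k=2, seed=0):
    """Return up to k unfluent variants of `sentence` by permuting its clauses."""
    rng = random.Random(seed)
    clauses = [c.strip() for c in re.split(PUNCT, sentence) if c.strip()]
    if len(clauses) < 2:  # simple sentence: split into two fragments at a random word
        words = sentence.split()
        cut = rng.randint(1, max(1, len(words) - 1))
        clauses = [" ".join(words[:cut]), " ".join(words[cut:])]
    variants, attempts = set(), 0
    while len(variants) < k and attempts < 20:
        attempts += 1
        perm = clauses[:]
        rng.shuffle(perm)
        if perm != clauses:
            variants.add(", ".join(perm))
    return list(variants)

print(permute_clauses("Although it was raining, we went out, and we enjoyed it."))
```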
Fluency Discrimination We denote a source and target sentence in parallel data as x and y. Perturbed samples augmented from y are ŷ1, ŷ2, ..., ŷk.
A reliable metric has the ability to give the original fluent target y a higher evaluation score than those k perturbed unfluent samples.
As for the score, we adopt the same calculation measure as BERTScore but replace the pre-trained monolingual model (Devlin et al., 2019; Liu et al.,
2019) with a cross-lingual model (Devlin et al., 2019; Conneau et al., 2019) to do reference-free evaluation (Zhou et al., 2020; Song et al., 2021), denominated as XBERTScore (Leiter, 2021). We use the 9th layer of XLM-Roberta-Base to extract contextual word embeddings. Here we only use F_BERT as the evaluation score between the source x and the target-side y or ŷi, which is represented as s_w(x, y) or s_w(x, ŷi). Then we can obtain word-level faithfulness scores s_w(x, y), s_w(x, ŷ1), ..., s_w(x, ŷk) for the (k + 1) pairs.
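The F score used here follows BERTScore's greedy matching over (cross-lingual) token embeddings; a bare-bones version over pre-computed embeddings might look as follows, without importance weighting or baseline rescaling.

```python
import torch

def greedy_f_score(src_emb, hyp_emb):
    """BERTScore-style F over L2-normalised token embeddings.

    src_emb: (m, d) source-token embeddings; hyp_emb: (n, d) candidate-token embeddings.
    """
    src = torch.nn.functional.normalize(src_emb, dim=-1)
    hyp = torch.nn.functional.normalize(hyp_emb, dim=-1)
    sim = src @ hyp.T                         # (m, n) cosine similarities
    recall = sim.max(dim=1).values.mean()     # each source token finds its best match
    precision = sim.max(dim=0).values.mean()  # each candidate token finds its best match
    return 2 * precision * recall / (precision + recall)

s_w = greedy_f_score(torch.randn(7, 768), torch.randn(9, 768))
print(float(s_w))
```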
In order to discriminate fluent sentences from perturbed ones according to these scores, we treat the original target and its corresponding perturbed samples as opposite and assign them 1/0 hard labels. The cross-lingual model which produces XBERTScore is trained to classify target-side sentences with a cross-entropy loss function. The objective function on N training samples is as follows:
$$L_{f l}=-\frac{1}{N}\sum_{x,y}\log\frac{e^{s_{w}(x,y)}}{e^{s_{w}(x,y)}+\sum_{i=1}^{k}e^{s_{w}(x,{\hat{y}}_{i})}}\,\,\,\,(1)$$
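Given the (1 + k) scores for one source sentence, the objective in Eq. (1) is a softmax cross-entropy with the original target as the positive class; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def fluency_loss(score_pos, scores_neg):
    """Eq. (1) for one source sentence.

    score_pos: scalar tensor s_w(x, y); scores_neg: (k,) tensor of s_w(x, y_hat_i).
    """
    logits = torch.cat([score_pos.view(1), scores_neg]).unsqueeze(0)  # (1, 1+k)
    target = torch.zeros(1, dtype=torch.long)                          # index 0 = fluent target
    return F.cross_entropy(logits, target)

loss = fluency_loss(torch.tensor(0.85), torch.tensor([0.80, 0.78, 0.79]))
print(float(loss))
```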
## 2.2 Word-Level Faithfulness
As for word-level faithfulness, each word in the source sentence should have a corresponding crosslingual representation in the target sentence and each word in the target sentence should be an accurate translation of its source word. This motivates us to do word-alignment training to enhance wordlevel evaluation.
This module shares a similar architecture with the sentence-level fluency module, where word embeddings are derived from the 9th layer of XLM-Roberta-Base. We take the same steps as Dou and Neubig (2021) to extract alignments. First, we compute the dot product between source and target word embeddings to obtain the similarity matrix S. Then S is normalized in the source and target dimensions.
We then get the source-to-target alignment matrix Sxy and the target-to-source alignment matrix Syx. A source/target token and a target/source token whose similarity value in the alignment matrix Sxy/Syx exceeds threshold c1 are regarded as aligned. The bidirectional alignment matrix A is deduced:
$$A=(S_{x y}>c_{1})*(S_{y x}^{T}>c_{1})\qquad\qquad(2)$$
Aij = 1 means xi and yj are aligned. Dou and Neubig (2021) also propose a self-training objective to align words with this bidirectional alignment, which improves alignment performance the most. Based on this objective, we adopt a self-guided contrastive cross-lingual word-alignment method. By contrast, we not only pull semantically aligned words to have closer contextual representations but also push unrelated words away (Luo et al., 2021; Su et al., 2022; Meng et al., 2022), which encourages the model to discriminate matched word embeddings from semantically unrelated ones.
The source token and target token are deemed to be unrelated if their similarity value is low. In our method, these unmatched pairs constitute negative samples and are pushed away. Moreover, we set threshold c2 to further restrict the negative samples. The unmatched pairs whose similarity value is lower than c2 are discarded from negatives as this unmatched relation can be easily distinguished by the model. In this way, we can control the difficulty of negative samples and only preserve those indistinguishable ones (hard negatives) to train the model.
$$B=(S_{x y}>c_{2})*(S_{y x}^{T}>c_{2})\qquad\qquad(3)$$
Bij = 1 means xi and yj are aligned or part of the hard negatives, which are preserved for training.
In Figure 1, the dark blue positions mean bidirectional alignment while the light blue positions are hard negative examples.
Finally, based on two dimensions of source and target, the positive and negative samples mentioned above, we construct a self-guided contrastive learning objective function on the word level as follows:
$$L_{x}=-\frac{1}{m}\sum_{i=1}^{m}\frac{\sum_{j=1}^{n}\mathbb{1}\left(A_{ij}=1\right)e^{S_{xy_{ij}}}}{\sum_{j=1}^{n}\mathbb{1}\left(B_{ij}=1\right)e^{S_{xy_{ij}}}}\qquad(4)$$

$$L_{y}=-\frac{1}{n}\sum_{i=1}^{n}\frac{\sum_{j=1}^{m}\mathbb{1}\left(A^{T}_{ij}=1\right)e^{S^{T}_{yx_{ij}}}}{\sum_{j=1}^{m}\mathbb{1}\left(B^{T}_{ij}=1\right)e^{S^{T}_{yx_{ij}}}}\qquad(5)$$

$$L_{word}=L_{x}+L_{y}\qquad(6)$$
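Putting Eqs. (2)–(6) together for a single sentence pair, a rough PyTorch sketch is given below; the threshold values, the eps guard, and the omission of batching and special-token masking are our simplifications.

```python
import torch

def word_level_loss(src_emb, tgt_emb, c1=1e-3, c2=1e-4):
    """Sketch of Eqs. (2)-(6) for one sentence pair.

    src_emb: (m, d) source-token embeddings; tgt_emb: (n, d) target-token embeddings.
    Threshold values are placeholders, not the paper's tuned settings.
    """
    S = src_emb @ tgt_emb.T                         # (m, n) dot-product similarities
    S_xy = S.softmax(dim=1)                         # normalise over target tokens
    S_yx = S.T.softmax(dim=1)                       # normalise over source tokens
    A = ((S_xy > c1) & (S_yx.T > c1)).float()       # Eq. (2): bidirectional alignments
    B = ((S_xy > c2) & (S_yx.T > c2)).float()       # Eq. (3): alignments + hard negatives

    exp_xy, exp_yx = S_xy.exp(), S_yx.exp()
    eps = 1e-9                                      # guard against empty rows
    L_x = -((A * exp_xy).sum(1) / ((B * exp_xy).sum(1) + eps)).mean()      # Eq. (4)
    L_y = -((A.T * exp_yx).sum(1) / ((B.T * exp_yx).sum(1) + eps)).mean()  # Eq. (5)
    return L_x + L_y                                # Eq. (6)

print(float(word_level_loss(torch.randn(6, 32), torch.randn(8, 32))))
```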
## 2.3 Sentence-Level Faithfulness
The main idea is to improve sentence-level faithfulness evaluation. Concretely, we distill sentence-level semantic meaning from SBERT into the word-level shared model.
We use SBERT to extract semantically meaningful sentence embeddings. The sentence semantic similarity between x and y is calculated with the cosine similarity between the sentence embeddings x and y:
$$s_{s}(x,y)={\frac{x\cdot y}{\|x\|\|y\|}}\qquad\qquad(7)$$

The semantic similarity reflects the sentence-level faithfulness from target to source. Then we can obtain sentence-level faithfulness scores s_s(x, y), s_s(x, ŷ1), ..., s_s(x, ŷk). We use KL-divergence as the objective function to reduce the discrepancy between sentence-level and word-level similarity:
$$L_{f a}=\sum_{x,y^{\prime}\in Y_{x}}s_{s}(x,y^{\prime})\log{\frac{s_{s}(x,y^{\prime})}{s_{w}(x,y^{\prime})}}\quad\quad(8)$$
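A sketch of this distillation term, taking Eq. (8) literally for the (1 + k) targets of one source sentence: the SBERT similarities act as the teacher signal and the word-level scores as the student; the clamping constant is our addition.

```python
import torch

def sentence_faithfulness_loss(s_sent, s_word, eps=1e-9):
    """Eq. (8) for one source sentence.

    s_sent: (1+k,) SBERT cosine similarities s_s(x, y') (teacher).
    s_word: (1+k,) word-level scores s_w(x, y') (student).
    """
    s_sent = s_sent.clamp_min(eps)
    s_word = s_word.clamp_min(eps)
    return (s_sent * (s_sent / s_word).log()).sum()

teacher = torch.tensor([0.92, 0.74, 0.70])
student = torch.tensor([0.88, 0.81, 0.77])
print(float(sentence_faithfulness_loss(teacher, student)))
```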
In this distillation module, SBERT plays the role of a teacher. Sentence-level semantic knowledge is distilled into the word-level shared model through these sentence-level faithfulness scores. In this way, evaluation is no longer limited to the word level but incorporates sentence semantics.
On the other hand, SBERT plays the role of a corrector. It is unreasonable that a perturbed sample with slightly changed semantics is considered to be completely contrary to the original sentence. We correct the binary classification and convert the 0/1 discrete values in the fluency discrimination module to continuous variables.
For sentence-level training, we combine fluency with faithfulness. This joint architecture is motivated by (Ren et al., 2021). The objective is:
$$L_{sent}=L_{fl}+\alpha L_{fa}\qquad\qquad(9)$$
α is a hyper-parameter to control the weight that the sentence-level faithfulness module accounts for.
## 3 Experiment

## 3.1 Setup
Datasets We train and evaluate on four language pairs: English↔Chinese and English↔German.
For training, we use the datasets following Awesome-Align (Dou and Neubig, 2021). The En-Zh training dataset is collected from the TsinghuaAligner5 website and En-De training data is Europarl v7 corpus. For evaluation, we use the segment-level daRR dataset of WMT18/19 and MQM dataset of WMT21 Metrics Task. Details about datasets are introduced in Appendix C.1.
Embeddings We use the 9th layer of XLM-Roberta-Base to extract contextual word embeddings. This follows the default setting of BERTScore.6 For sentence embeddings, we adopt the xlm-r-bert-base-nli-stsb-mean-tokens model,7 the same as SentSim.

5 http://nlp.csai.tsinghua.edu.cn/~ly/systems/TsinghuaAligner/TsinghuaAligner.html
6 https://github.com/Tiiiger/bert_score
7 https://huggingface.co/sentence-transformers/xlm-r-bert-base-nli-stsb-mean-tokens

Baselines For reference-based metrics, we choose sentBLEU (Papineni et al., 2002) and YiSi-1 (Lo, 2019). For reference-free metrics, we choose XBERTScore (Leiter, 2021), YiSi-2 (Lo, 2019), SentSim (Song et al., 2021) and BERTScore-MKD (Zhang et al., 2022). Most results of the baseline models are reported in the original papers (Ma et al., 2018, 2019; Freitag et al., 2021; Zhang et al., 2022). We also implement experiments that have not been reported, such as XBERTScore, SentSim and BERTScore-MKD.
Training Process For ReFreeEval, the sentence-level module is trained first. Then the word-level faithfulness module is trained based on the best checkpoint of the sentence-level training. Training details are in Appendix C.3.
Evaluation Measures For WMT18/19 segment-level evaluation, a Kendall's Tau-like formulation is used to measure the scores against daRR.
$$\tau=\frac{|Concordant|-|Discordant|}{|Concordant|+|Discordant|}\tag{10}$$
For WMT21 segment-level evaluation, the conventional Kendall's Tau statistic is used to measure the correlation between our scores and the MQM scores.
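Each daRR judgement is a pair of translations where one was rated better than the other; a metric is concordant on a pair when it also scores that translation higher. A minimal sketch of Eq. (10) (tie handling simplified):

```python
def kendall_tau_like(metric_scores, darr_pairs):
    """Eq. (10): Kendall's Tau-like agreement with daRR relative rankings.

    metric_scores: dict mapping a translation id to its metric score.
    darr_pairs: iterable of (better_id, worse_id) human judgements.
    """
    concordant = discordant = 0
    for better, worse in darr_pairs:
        if metric_scores[better] > metric_scores[worse]:
            concordant += 1
        elif metric_scores[better] < metric_scores[worse]:
            discordant += 1   # metric ties are simply skipped in this sketch
    return (concordant - discordant) / (concordant + discordant)

scores = {"sysA-seg1": 0.71, "sysB-seg1": 0.64, "sysC-seg1": 0.66}
pairs = [("sysA-seg1", "sysB-seg1"), ("sysC-seg1", "sysA-seg1")]
print(kendall_tau_like(scores, pairs))  # 0.0
```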
## 3.2 Results
The main results are displayed in Tables 1, 2 and 3. First, we observe that the fluency, word-level faithfulness, and sentence-level faithfulness modules each improve the evaluation performance. We also find that the main improvement comes from sentence-level fluency, indicating that XBERTScore as a token-level evaluation metric lacks sentence-level knowledge. Then, the ensemble model combining the advantages of the three modules achieves even better results. Compared with some reference-based baselines, it achieves comparable results or even outperforms them. More details of the experimental results are in Appendix C.4.
## 4 Conclusion
We propose a reference-free evaluation approach, ReFreeEval, that comprehensively considers three aspects: fluency, word-level faithfulness, and sentence-level faithfulness. Extensive experiments spanning datasets from WMT18/19/21 demonstrate the superiority of each module designed for each aspect.

8 Reported by Zhang et al. (2022). Our implementations here are 0.1997 in Zh-En and 0.0623 in De-En.
| Model | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| *Reference-based* | | | | |
| SentBLEU | 0.178 | 0.311 | 0.415 | 0.620 |
| YiSi-1 | 0.211 | 0.323 | 0.488 | 0.691 |
| *Reference-free* | | | | |
| XBERTScore | 0.0831 | 0.1129 | 0.3313 | 0.3143 |
| SentSim | 0.1213 | 0.1436 | 0.4127 | 0.4315 |
| YiSi-2 | 0.091 | 0.101 | 0.279 | 0.359 |
| BERTScore-MKD | 0.1012 | 0.1102 | 0.4082 | 0.4329 |
| word-level | 0.0948 | 0.1355 | 0.3337 | 0.3413 |
| fluency | 0.1371 | 0.2503 | 0.3733 | 0.4751 |
| sent-fa | 0.1169 | 0.1759 | 0.3529 | 0.4319 |
| sent-level | 0.1798 | 0.2749 | 0.4144 | 0.5817 |
| ReFreeEval | **0.1813** | **0.2920** | **0.4154** | **0.5884** |

Table 1: Segment-level metric results for WMT18: absolute Kendall's Tau formulation on different evaluation metrics.
ReFreeEval, combining the above three modules, achieves a higher correlation with human judgments, outperforming current SOTA reference-free metrics like YiSi-2, SentSim and BERTScore-MKD in most language directions.
## Limitations
In this section, we discuss some limitations of our method and future work based on the limitations.
First, the enhancement of the word-level module is not as strong as the remedy provided by the sentence-level module. Our word-level module on its own achieves an improvement over XBERTScore, but does not improve as much as the sentence-level module. The main reason is that the XBERTScore framework lacks sentence-level semantic knowledge. Besides, our word-level self-guided contrastive method doesn't resort to external information and only consolidates the alignment already existing in the pre-trained language model. Second, ReFreeEval performs comparably with baseline models on language pairs involving German. We suspect this is due to the evaluation of QE: Ma et al. (2019) mention that the evaluation results across all language pairs are unstable in the "QE as a Metric" track and cannot yet be explained.
In the future, we'll further explore valuable external information on the word level, and we'll try to explore discrepancies among language pairs to optimize the results.
| Model | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| *Reference-based* | | | | |
| SentBLEU | 0.323 | 0.270 | 0.056 | 0.248 |
| YiSi-1 | 0.426 | 0.355 | 0.164 | 0.351 |
| *Reference-free* | | | | |
| XBERTScore | 0.1482 | 0.0347 | 0.0488 | 0.1803 |
| SentSim | 0.2213 | 0.0771 | 0.0629 | 0.2334 |
| YiSi-2 | 0.253 | 0.044 | 0.068 | 0.212 |
| BERTScore-MKD | 0.2088 | 0.0805 | 0.0938 | 0.2636 |
| word-level | 0.1864 | 0.0382 | 0.0517 | 0.1894 |
| fluency | 0.2435 | 0.1679 | 0.0682 | 0.2537 |
| sent-fa | 0.2346 | 0.0941 | 0.0497 | 0.2257 |
| sent-level | 0.3032 | 0.2387 | **0.0807** | **0.3013** |
| ReFreeEval | **0.3173** | **0.2508** | 0.0739 | 0.2995 |

Table 2: Segment-level metric results for WMT19.
| Model | Zh-En w/o HT | En-De w/o HT | Zh-En w/ HT | En-De w/ HT |
|---|---|---|---|---|
| *Reference-based* | | | | |
| SentBLEU | 0.176 | 0.083 | 0.165 | 0.064 |
| YiSi-1 | 0.302 | 0.172 | 0.289 | 0.145 |
| *Reference-free* | | | | |
| XBERTScore | 0.2457 | 0.0367 | 0.2395 | 0.0176 |
| SentSim | 0.1938 | 0.0455 | 0.1867 | 0.0234 |
| YiSi-2 | **0.270** | 0.098 | **0.263** | 0.071 |
| BERTScore-MKD | 0.2227 | 0.0503 | 0.2137 | 0.0290 |
| word-level | 0.2489 | 0.0388 | 0.2425 | 0.0196 |
| fluency | 0.2450 | 0.0482 | 0.2382 | 0.0281 |
| sent-fa | 0.2429 | 0.0448 | 0.2359 | 0.0238 |
| sent-level | 0.2601 | 0.0988 | 0.2520 | 0.0819 |
| ReFreeEval | 0.2628 | **0.1008** | 0.2543 | **0.0828** |

Table 3: Segment-level Kendall-Tau correlations for WMT21 MQM data.
In addition, our simple but effective data augmentation method, clause permutation, doesn't rely on rules or toolkits and is only an initial attempt at modeling fluency. It could benefit from further refinement, such as language-specific knowledge or syntactic and semantic parsing to recognize clauses. We'll conduct an in-depth investigation in future work.
## Acknowledgements
The research work described in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Nature Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130). Wenjuan Han is supported by the Talent Fund of Beijing Jiaotong University (2023XKRC006). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. We would like to express our sincere gratitude to Hui Huang for guidance before this research. We are also grateful to Chunyou Li, Yu Xiang and Yu Zhang for their assistance during internship.
## References
Sweta Agrawal, George Foster, Markus Freitag, and Colin Cherry. 2021. Assessing reference-free peer evaluation for machine translation. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1158–1171, Online. Association for Computational Linguistics.
Rafael E. Banchs, Luis F. D'Haro, and Haizhou Li. 2015.
Adequacy–fluency metrics: Evaluating mt in the continuous space model framework. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
23(3):472–482.
Julian Chow, Lucia Specia, and Pranava Madhyastha.
2019. WMDO: Fluency-based word mover's distance for machine translation evaluation. In *Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)*, pages 494–500, Florence, Italy. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *CoRR*,
abs/1911.02116.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora.
CoRR, abs/2101.08231.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing.
In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392. Association for Computational Linguistics.
Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, and Dong Yu. 2020.
Modeling fluency and faithfulness for diverse neural machine translation. *Proceedings of the AAAI*
Conference on Artificial Intelligence, 34(01):59–66.
Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, and Christian Federmann. 2019.
Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1–10, Florence, Italy. Association for Computational Linguistics.
Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In *Proceedings of the Sixth Conference on Machine Translation*, pages 733–774, Online. Association for Computational Linguistics.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017.
Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation.
In *Proceedings of the Second Conference on Machine* Translation, pages 562–568, Copenhagen, Denmark.
Association for Computational Linguistics.
Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pages 957–
966, Lille, France. PMLR.
Christoph Wolfgang Leiter. 2021. Reference-free wordand sentence-level translation evaluation with tokenmatching metrics. In *Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems*,
pages 157–164, Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Chi-kiu Lo. 2019. YiSi - a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In *Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)*, pages 507–513, Florence, Italy. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *CoRR*,
abs/1711.05101.
Ruikun Luo, Guanhuan Huang, and Xiaojun Quan.
2021. Bi-granularity contrastive learning for posttraining in few-shot scene. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP*
2021, pages 1733–1742, Online. Association for Computational Linguistics.
Qingsong Ma, Ondˇrej Bojar, and Yvette Graham. 2018.
Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671–688, Belgium, Brussels. Association for Computational Linguistics.
Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy. Association for Computational Linguistics.
Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ondˇrej Bojar. 2020. Results of the WMT20 metrics shared task. In *Proceedings of the Fifth Conference on Machine Translation*, pages 688–725, Online. Association for Computational Linguistics.
Zhao Meng, Yihan Dong, Mrinmaya Sachan, and Roger Wattenhofer. 2022. Self-supervised contrastive learning with adversarial perturbations for defending word substitution-based attacks. In *Findings of the Association for Computational Linguistics: NAACL 2022*,
pages 87–101, Seattle, United States. Association for Computational Linguistics.
João Moura, Miguel Vera, Daan van Stigt, Fabio Kepler, and André F. T. Martins. 2020. IST-unbabel participation in the WMT20 quality estimation shared task.
In *Proceedings of the Fifth Conference on Machine* Translation, pages 1029–1036, Online. Association for Computational Linguistics.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There's no comparison: Referenceless evaluation metrics in grammatical error correction. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*,
pages 2109–2115.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems, volume 32. Curran Associates, Inc.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the Tenth Workshop on Statistical Machine Translation*, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Ricardo Rei, Ana C Farinha, Chrysoula Zerva, Daan van Stigt, Craig Stewart, Pedro Ramos, Taisiya Glushkova, André F. T. Martins, and Alon Lavie.
2021. Are references really needed? unbabel-IST
2021 submission for the metrics shared task. In *Proceedings of the Sixth Conference on Machine Translation*, pages 1030–1040, Online. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020a. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Catarina Farinha, and Alon Lavie. 2020b. Unbabel's participation in the WMT20 metrics shared task. *CoRR*, abs/2010.15535.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
CoRR, abs/1908.10084.
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2825–2835, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.
Yurun Song, Junchen Zhao, and Lucia Specia. 2021.
SentSim: Crosslingual semantic evaluation of machine translation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3143–3156, Online. Association for Computational Linguistics.
Peter Stanchev, Weiyue Wang, and Hermann Ney. 2019.
EED: Extended edit distance measure for machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 514–520, Florence, Italy. Association for Computational Linguistics.
Yixuan Su, Fangyu Liu, Zaiqiao Meng, Tian Lan, Lei Shu, Ehsan Shareghi, and Nigel Collier. 2022. TaCL:
Improving BERT pre-training with token-aware contrastive learning. In *Findings of the Association* for Computational Linguistics: NAACL 2022, pages 2497–2507, Seattle, United States. Association for Computational Linguistics.
Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121, Online.
Association for Computational Linguistics.
Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022. SNCSE:
contrastive learning for unsupervised sentence embedding with soft negative samples. *CoRR*,
abs/2201.05979.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3898–
3907, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Elizaveta Yankovskaya, Andre Tättar, and Mark Fishel.
2019. Quality estimation and translation metrics via pre-trained word and sentence embeddings. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 101–105, Florence, Italy. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc.
Runzhe Zhan, Xuebo Liu, Derek F. Wong, and Lidia S.
Chao. 2021. Difficulty-aware machine translation
evaluation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 26–32, Online. Association for Computational Linguistics.
Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Pairwise supervised contrastive learning of sentence representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5786–5798, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Min Zhang, Hao Yang, Shimin Tao, Yanqing Zhao, Xiaosong Qiao, Yinlu Li, Chang Su, Minghan Wang, Jiaxin Guo, Yilun Liu, and Ying Qin. 2022. Incorporating multilingual knowledge distillation into machine translation evaluation. In *Knowledge Graph* and Semantic Computing: Knowledge Graph Empowers the Digital Economy, pages 148–160, Singapore. Springer Nature Singapore.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2020a. Inducing language-agnostic multilingual representations. *CoRR*, abs/2008.09112.
Wei Zhao, Goran Glavaš, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020b. On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1656–
1671, Online. Association for Computational Linguistics.
Lei Zhou, Liang Ding, and Koichi Takeda. 2020. Zeroshot translation quality estimation with explicit crosslingual patterns. In *Proceedings of the Fifth Conference on Machine Translation*, pages 1068–1074, Online. Association for Computational Linguistics.
## A Related Work

## A.1 Reference-Based Evaluation For MT
According to matching features, reference-based evaluation methods can be categorized as follows: (1) n-gram (e.g., BLEU (Papineni et al., 2002) and CHRF (Popović, 2015)); (2) edit distance (e.g., TER (Snover et al., 2006) and EED (Stanchev et al., 2019)); (3) word embedding (e.g., YiSi (Lo, 2019) and BERTScore (Zhang* et al., 2020)); (4) predictor-estimator model (Kim et al., 2017) (e.g., COMET (Rei et al., 2020a)).

| | Sentence | DA | XBERTScore | ReFreeEval |
|---|---|---|---|---|
| SRC | 但也有顾客认为,网站退款服务不是百分之百完美。 | | | |
| REF | Nonetheless, some customers felt that website refund services are not perfect. | | | |
| MT1 | But there are also customers who believe the site refund service is not 100 per cent perfect. | 1.1059 | 0.8993 | 0.9249 |
| MT2 | But also some customers believe that website refunds money the service is not 100% perfect. | -1.5038 | 0.9031 | 0.8680 |

Table 4: An example of the WMT18-Metrics dataset.

n-gram matching metrics are restricted to surface form and neglect semantic meaning.
Instead, embedding-based metrics adopt word embedding to explore word-level semantic meaning. WMDo (Chow et al., 2019) builds on Word Mover's Distance (Kusner et al., 2015) to measure the similarity of candidate and reference. It also introduces a word order penalty to take fluency into account. YiSi-1 aggregates the weighted lexical similarity to evaluate translation quality.
BERTScore calculates the token-level semantic similarity between candidate translation tokens and reference tokens. DA-BERTScore (Zhan et al., 2021) takes translation difficulty into account and assigns difficulty weightings to each token in the reference.
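To make the token-level matching behind BERTScore-style metrics concrete, the following is a minimal sketch of greedy token matching over precomputed contextual embeddings. The function name, shapes, and the assumption that embeddings are already L2-normalized are ours, not from the papers cited above.

```python
import torch

def greedy_matching_f1(cand_emb, ref_emb):
    """BERTScore-style F1 from greedy token matching (illustrative sketch).
    cand_emb: (m, d) contextual embeddings of candidate tokens, L2-normalized.
    ref_emb:  (n, d) contextual embeddings of reference tokens, L2-normalized.
    """
    sim = cand_emb @ ref_emb.T                  # (m, n) cosine similarities
    precision = sim.max(dim=1).values.mean()    # best reference match per candidate token
    recall = sim.max(dim=0).values.mean()       # best candidate match per reference token
    return (2 * precision * recall / (precision + recall)).item()
```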
COMET leverages contextual word embeddings of the source sentence, MT hypothesis, and reference (or human post-edition) extracted from pre-trained cross-lingual models. The embeddings are combined and fed into a feed-forward network. It is a quality estimation system and is trained with human assessments (DA, HTER, MQM).
## A.2 Reference-Free Evaluation For MT
As references are costly to collect in practice, reference-free metrics have attracted more attention. Recent studies have explored evaluating translation quality based only on the source text.
YiSi-2 calculates similarities between cross-lingual word embeddings for aligned source and candidate translation words and outputs an F-measure statistic as the metric score. Zhao et al. (2020b) propose to re-align vector spaces and couple the semantic faithfulness scores with GPT-based fluency testing.
OpenKiWi-XLMR (Moura et al., 2020) and COMET-QE (Rei et al., 2020b) are quality estimation systems from "QE as a Metric" task (Mathur et al., 2020). They remove reference at the input but still require human assessments to train.
As reference-based BERTScore has achieved outstanding performance, many recent reference-free evaluation methods build on BERTScore. XBERTScore (Leiter, 2021) adopts a cross-lingual pre-trained language model to evaluate based only on the source sentence, without a reference.
SentSim (Song et al., 2021) combines semantic sentence similarity with token-level BERTScore.
BERTScore-MKD (Zhang et al., 2022) also uses sentence embeddings to achieve cross-lingual word embedding alignment by multilingual knowledge distillation.
## B Case Study
From Table 4, we can see that there is a significant difference between the ground-truth DA of MT1 and MT2, and the quality of MT1 is much better than that of MT2. However, XBERTScore evaluates incorrectly and assigns MT1 a lower score than MT2. Although MT2 is translated word by word, which indicates poor fluency, almost all words in MT2 can be aligned with the source. As XBERTScore relies on word-level matching, it is easily confused. The model trained with our holistic approach can make up for this shortcoming and discriminate the fluency problem.
## C Experimental Details

## C.1 Data Analysis
Following the data setting of awesome-align (Dou and Neubig, 2021), we use the following parallel corpora to fine-tune our model. The English-Chinese (En-Zh) dataset is collected from the TsinghuaAligner website and the English-German (En-De) dataset is the Europarl v7 corpus. We adopt only a multilingual setting but use less data. We randomly sample 20k parallel sentence pairs from each dataset and mix them together.

In the word-level faithfulness module, we directly use the mixed data for training. In the sentence-level fluency and faithfulness module, as only the target is perturbed, we randomly select 1/3 of the mixed data and swap the source and target in order to attend to all three languages.
To evaluate our method, we choose the segment-level evaluation datasets of the WMT Metrics Task. Two types of human assessments are included: the segment-level metrics datasets of WMT18/19 use daRR (direct assessment relative ranking) as ground truth, and WMT21 uses MQM (multidimensional quality metrics) as ground truth.
## C.2 Details Of Sentence-Level Faithfulness
Before applying KL-divergence, the word-level and sentence-level similarity scores are processed as follows.
$$s_{w}(x,y)=\log\frac{e^{s_{w}(x,y)}}{\sum_{y^{\prime}\in Y_{x}}e^{s_{w}(x,y^{\prime})}}\tag{11}$$

$$s_{s}(x,y)=\frac{e^{s_{s}(x,y)}}{\sum_{y^{\prime}\in Y_{x}}e^{s_{s}(x,y^{\prime})}}\tag{12}$$
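A minimal PyTorch sketch of how the two normalized score distributions of Eqs. 11–12 could be compared with KL-divergence is shown below. The batching layout, the function name, and the direction of the divergence are our assumptions; the paper only states that KL-divergence is applied after this processing.

```python
import torch
import torch.nn.functional as F

def sentence_faithfulness_kl(s_w, s_s):
    """Compare word-level and sentence-level similarity scores via KL-divergence.
    s_w, s_s: (batch, num_candidates) raw word-level / sentence-level scores,
    where each row corresponds to one source x and its candidate set Y_x.
    """
    log_p_w = F.log_softmax(s_w, dim=-1)  # Eq. 11: log-softmax over candidates
    p_s = F.softmax(s_s, dim=-1)          # Eq. 12: softmax over candidates
    # F.kl_div expects log-probabilities as input and probabilities as target;
    # which side is the target is an assumption in this sketch.
    return F.kl_div(log_p_w, p_s, reduction="batchmean")
```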
## C.3 Training Details
Our model is fine-tuned based on the 9th layer of XLM-RoBERTa-Base. We implement our model with the PyTorch (Paszke et al., 2019), Transformers (Wolf et al., 2020), and BERTScore (Zhang* et al., 2020) packages. We use the AdamW optimizer (Loshchilov and Hutter, 2017). The model is trained on up to 3 GeForce RTX 2080 Ti GPUs.
For sentence-level training, the hyperparameter settings are displayed in Table 5. We mainly search α ∈ {0, 1, 5, 10, 20, 30, 40, 50, 100, 500}.
The training process is on a single GPU with gradient accumulation. We evaluate the model with classification accuracy every 100 steps and save the checkpoint with the highest accuracy.
For word-level training, the hyperparameter settings are displayed in Table 6. We search batch size ∈ {8, 10, 15, 16, 24, 28, 32, 48}, learning rate ∈ {1e-5, 5e-6, 3e-6, 1e-6, 2e-6, 5e-7, 1e-7}, and c2 ∈ {1e-5, 1e-10, 1e-15, 1e-20, 1e-30, 1e-50}. For the WMT18/19 datasets, the training process is on 3 GPUs and the batch size on each GPU is 5. For the WMT21 MQM dataset, the batch size is 32 and the learning rate is 2e-6; the training is on 4 GPUs and the batch size on each GPU is 8. The code of this module is implemented based on awesome-align (Dou and Neubig, 2021)⁹. This word-level faithfulness training continues from the best checkpoint of the sentence-level training.
| Hyperparameters | Values |
|-------------------|----------|
| Epoch | 1 |
| Evaluation Step | 100 |
| Batch Size | 10 |
| Learning Rate | 1e-6 |
| Warmup Steps | 1000 |
| α | 30 |
| k | 7 |
| Random Seed | 42 |

Table 5: Hyperparameters for sentence-level training.
| Hyperparameters | Values |
|-------------------|------------|
| Epoch | 1 |
| Batch Size | 15(32) |
| Learning Rate | 1e-6(2e-6) |
| Warmup Steps | 200 |
| c1 | 1e-3 |
| c2 | 1e-20 |
| Random Seed | 42 |
Table 6: Hyperparameters for word-level training of WMT18/19.
## C.4 Details Of Experimental Results
In Section 3, as we want to demonstrate the improvement of our ReFreeEval in a multilingual setting across all language directions, we report the results corresponding to the highest average over all language pairs for each dataset.
| Model | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| word-level | 0.0948 | 0.1355 | 0.3337 | 0.3413 |
| sent-level | 0.1798 | 0.2749 | 0.4144 | 0.5817 |
| ReFreeEval | **0.1857** | **0.2943** | **0.4154** | **0.5995** |

Table 7: Segment-level best results of ReFreeEval on each language direction for WMT18.
Tables 7, 8, and 9 show the best results for each language direction on the WMT18/19/21 datasets.
9 https://github.com/neulab/awesome-align/tree/xlmr
| Model | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| word-level | 0.1864 | 0.0382 | 0.0517 | 0.1894 |
| sent-level | 0.3032 | 0.2387 | 0.0807 | 0.3013 |
| ReFreeEval | **0.3195** | **0.2561** | **0.0831** | **0.3041** |

Table 8: Segment-level best results of ReFreeEval on each language direction for WMT19.
| Model | Zh-En w/o HT | En-De w/o HT | Zh-En w/ HT | En-De w/ HT |
|---|---|---|---|---|
| word-level | 0.2489 | 0.0388 | 0.2425 | 0.0196 |
| sent-level | 0.2601 | 0.0988 | 0.2520 | 0.0819 |
| ReFreeEval | **0.2684** | **0.1189** | **0.2603** | **0.1080** |

Table 9: Segment-level best results of ReFreeEval on each language direction for WMT21.
## D Analysis

## D.1 Analysis Of Data Augmentation
We compare our clause permutation with the token-level data augmentation methods shuffling and repetition. The results are displayed in Table 10.

For the fluency module alone, our clause-based augmentation method performs much better than the others, which suggests that it provides more proper and valuable fluency information. As for sentence-level faithfulness, we compare the variation of sentence semantic similarity in Table 11. The disturbance caused by token shuffling is too great, while that caused by our clause permutation is small. An obvious disturbance is easy to distinguish and learn, whereas the disturbance caused by our method can hardly be distinguished by sentence similarity alone, so this module by itself is not sufficient.

However, with the clause permutation method, the combination of both fluency and sentence-level faithfulness outperforms the others by a large margin. This verifies that our clause-based augmentation method is effective.
Based on the linguistic definition of clauses, our clause permutation approach can effectively perturb continuity and smoothness, which constitute the essence of fluency. This approach is simple and intuitive, making it a suitable preliminary step toward more in-depth investigations of realistic perturbations.
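For illustration, a minimal sketch of clause-level permutation for constructing disfluent negatives is given below. Splitting clauses on commas/semicolons, rejoining with commas, and the function name are our simplifying assumptions; the exact clause definition and the handling of Chinese punctuation in the paper may differ.

```python
import random
import re

def clause_permutation(sentence, seed=None):
    """Build a disfluent negative by permuting clauses (illustrative sketch)."""
    rng = random.Random(seed)
    clauses = [c.strip() for c in re.split(r"[,;]", sentence) if c.strip()]
    if len(clauses) < 2:
        return sentence  # nothing to permute
    order = list(range(len(clauses)))
    for _ in range(10):  # retry a few times to avoid returning the identity order
        rng.shuffle(order)
        if order != list(range(len(clauses))):
            break
    return ", ".join(clauses[i] for i in order)
```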
## D.2 Balance Between Fluency Discrimination And Faithfulness Distillation
For sentence-level training, we adjust the hyperparameter α to balance fluency and faithfulness.
A small α means the sentence-level training mainly focuses on classification, which may neglect the semantic meaning of perturbed samples, as explained in Section 2.3. A large α, in contrast, weakens the effect of the hard classification labels, and the soft similarity alone is also not enough for sentence-level training.

From Table 12, we can conclude that only by keeping the balance between hard fluency discrimination and soft faithfulness distillation can we achieve excellent experimental results.
## D.3 Control Over Difficulty Of Negative Samples In Word-Level Faithfulness
We experiment with different settings of threshold c2 in word-level faithfulness to observe the influence of the difficulty of negative samples.
A small c2 reduces the difficulty of contrastive learning: this setting includes negative samples whose unmatched relations can be easily distinguished. A large c2, by contrast, restricts the negative samples severely, which may discard some useful information. The results in Table 13 indicate that properly controlling the difficulty of negative samples leads to great performance on the whole.

However, for En-De, a small threshold is beneficial for improving the results. This may be because negatives without strict limitations are harmful to contrastive learning, due to specific language features of En-De.
## E Significance Test
Our experimental results above are based on model training in a single run with random seed 42. In this section, we perform a statistical significance test following Dror et al. (2018) to further compare the performance of our ReFreeEval with the strong baseline SentSim. We run both models 10 times with random seeds ∈ {19, 27, 42, 55, 76, 80, 99, 153, 178, 200}. The p-values of the statistical test are displayed in Table 14. As we can see, the p-value on each language pair is well below the significance level of 0.05, which indicates that the results of our ReFreeEval are significantly better than those of SentSim.
| Data Aug Method | Model | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|---|
| Permutation | fluency | 0.1371 | 0.2503 | 0.3733 | 0.4751 |
| Permutation | sent-fa | 0.1169 | 0.1759 | 0.3529 | 0.4319 |
| Permutation | sent-level | 0.1798 | 0.2749 | 0.4144 | 0.5817 |
| Shuffle | fluency | 0.1055 | 0.1815 | 0.3491 | 0.3749 |
| Shuffle | sent-fa | 0.0809 | 0.0720 | 0.3100 | 0.3034 |
| Shuffle | sent-level | 0.1469 | 0.2238 | 0.3729 | 0.4757 |
| Repetition | fluency | 0.1106 | 0.1586 | 0.3359 | 0.4048 |
| Repetition | sent-fa | 0.1305 | 0.1979 | 0.3642 | 0.4552 |
| Repetition | sent-level | 0.0654 | 0.0716 | 0.2846 | 0.2847 |

Table 10: Segment-level metric results for WMT18: absolute Kendall's Tau formulation with different data augmentation methods.
| α | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| 0 | 0.1425 | 0.2564 | 0.3774 | 0.4951 |
| 5 | 0.1636 | 0.2538 | 0.3971 | 0.5656 |
| 10 | 0.1664 | 0.2522 | 0.3979 | 0.5692 |
| 50 | **0.1800** | **0.2798** | **0.4133** | **0.5843** |
| 100 | 0.1608 | 0.2613 | 0.3918 | 0.5524 |
| 500 | 0.1277 | 0.2001 | 0.3615 | 0.4628 |

Table 12: The influence of different α settings on WMT18 segment-level metric results.
| c2 | Zh-En | En-Zh | De-En | En-De | Avg |
|---|---|---|---|---|---|
| 1e-5 | 0.1707 | 0.2482 | 0.4039 | **0.6019** | 0.3562 |
| 1e-10 | 0.1768 | 0.2752 | 0.4133 | 0.5903 | 0.3639 |
| 1e-15 | 0.1778 | 0.2798 | 0.4134 | 0.5845 | 0.3639 |
| 1e-20 | **0.1813** | **0.2920** | **0.4154** | 0.5884 | **0.3693** |
| 1e-30 | 0.1806 | 0.2766 | 0.4072 | 0.5757 | 0.3600 |
| 1e-50 | 0.1744 | 0.2701 | 0.3898 | 0.5663 | 0.3502 |
| 0 | 0.0883 | 0.2077 | 0.2781 | 0.4998 | 0.2685 |

Table 13: The influence of different threshold c2 settings on WMT18 segment-level metric results.
| | Permutation | Shuffle | Repetition |
|---|---|---|---|
| Variation | -0.0213 | -0.6285 | -0.0380 |

Table 11: The variation of sentence similarity due to data augmentation compared with original data.
| | Zh-En | En-Zh | De-En | En-De |
|---|---|---|---|---|
| WMT18 | 7.02e-13 | 1.78e-14 | 3.43e-2 | 2.02e-16 |
| WMT19 | 3.79e-17 | 5.72e-15 | 2.09e-2 | 1.93e-12 |
| WMT21 w/o HT | 5.36e-13 | - | - | 6.26e-16 |
| WMT21 w/ HT | 8.44e-13 | - | - | 1.36e-15 |

Table 14: p-values of the significance test on WMT18/19/21.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section: Limitations
✗ A2. Did you discuss any potential risks of your work?
No potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section: Abstract and Section1: Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Mainly in Section 3: Experiments and Appendix D: Experimental Details. Section 1: Introduction and Section 2: Approach also mention models.
✓ B1. Did you cite the creators of artifacts you used?
Section Reference and footnotes of Section2 and Section 3.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section Reference and footnotes of Section2 and Section 3.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section2 and Section3
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Our data doesn't have these information.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section3 and Appendix D
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section3 and Appendix D.1
## C ✓ **Did You Run Computational Experiments?**

Section 3 and Appendix E/F
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D.3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section3 and Appendix D.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix F
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D.3

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
sul-choi-2023-balancing | Balancing Lexical and Semantic Quality in Abstractive Summarization | https://aclanthology.org/2023.acl-short.56 | An important problem of the sequence-to-sequence neural models widely used in abstractive summarization is exposure bias. To alleviate this problem, re-ranking systems have been applied in recent years. Despite some performance improvements, this approach remains underexplored. Previous works have mostly specified the rank through the ROUGE score and aligned candidate summaries, but there can be quite a large gap between the lexical overlap metric and semantic similarity. In this paper, we propose a novel training method in which a re-ranker balances the lexical and semantic quality. We further newly define false positives in ranking and present a strategy to reduce their influence. Experiments on the CNN/DailyMail and XSum datasets show that our method can estimate the meaning of summaries without seriously degrading the lexical aspect. More specifically, it achieves an 89.67 BERTScore on the CNN/DailyMail dataset, reaching new state-of-the-art performance. Our code is publicly available at \url{https://github.com/jeewoo1025/BalSum}. | # Balancing Lexical And Semantic Quality In Abstractive Summarization
Jeewoo Sul and **Yong Suk Choi**∗
Department of Computer Science, Hanyang University, Seoul, Korea
{jeewoo25, cys}@hanyang.ac.kr
## Abstract
An important problem of the sequence-to-sequence neural models widely used in abstractive summarization is *exposure bias*. To alleviate this problem, re-ranking systems have been applied in recent years. Despite some performance improvements, this approach remains underexplored. Previous works have mostly specified the rank through the ROUGE score and aligned candidate summaries, but there can be quite a large gap between the lexical overlap metric and semantic similarity. In this paper, we propose a novel training method in which a re-ranker balances the lexical and semantic quality. We further newly define false positives in ranking and present a strategy to reduce their influence. Experiments on the CNN/DailyMail and XSum datasets show that our method can estimate the meaning of summaries without seriously degrading the lexical aspect. More specifically, it achieves an 89.67 BERTScore on the CNN/DailyMail dataset, reaching new state-of-the-art performance. Our code is publicly available at https://github.com/jeewoo1025/BalSum.
## 1 Introduction
The performance of sequence-to-sequence
(Seq2Seq) neural models for abstractive summarization (Lewis et al., 2020; Nallapati et al.,
2016; See et al., 2017; Zhang et al., 2020) has improved significantly. The dominant training paradigm of Seq2Seq models is that of Maximum Likelihood Estimation (MLE), maximizing the likelihood of each output given the gold history of target sequences during training. However, since the models generate the sequence in an auto-regressive manner at inference, the errors made in the previous steps accumulate in the next step thereby affecting the entire sequence. This phenomenon is known as *exposure bias* (Bengio et al., 2015; Ranzato et al., 2016). To mitigate this
problem, re-ranking systems (Liu et al., 2021; Liu and Liu, 2021; Liu et al., 2022; Ravaut et al., 2022) have recently been introduced to generate a more appropriate summary.

∗Corresponding author

Figure 1: Distribution of z (%) for a base BART model on CNN/DM. Since a BART model generates a pool of 16 diverse beam search candidates, the X-axis ranges from 1 to 16. If z = 1, it means that both ROUGE and BERTScore are high. As z increases, the gap between ROUGE and BERTScore tends to increase. The Y-axis represents the proportion of z in the test set. The distribution for XSum is in Appendix A.
There are two training objectives for applying reranking to abstractive summarization: *contrastive* learning and *multi-task learning*. The contrastive learning-based approaches deploy margin-based losses. SimCLS (Liu and Liu, 2021) and BRIO-Ctr
(Liu et al., 2022) train a large pre-trained model, such as RoBERTa (Liu et al., 2019) and BART
(Lewis et al., 2020), to align the candidate summaries according to the quality. The authors use the ROUGE (Lin, 2004) score as a quality measurement. The multi-task learning-based approaches combine at least two losses that perform different roles. SummaReranker (Ravaut et al., 2022) minimizes the average over the binary cross-entropy losses optimized for each evaluation metric. In addition, BRIO-Mul (Liu et al., 2022) demonstrates that the combination of the contrastive and crossentropy loss works complementarily and has better performance.
In this paper, we analyze the three main drawbacks of existing re-ranking approaches. First, we argue that current methods focus excessively on ranking summaries in terms of lexical overlap. Inspired by Zhong et al. (2020), we conduct a preliminary study, by sorting candidate summaries in descending order based on the ROUGE score and then defining z as the rank index of the highest BERTScore summary. As demonstrated in Fig. 1, we can observe that there is a large gap between lexical overlap and semantic similarity. In a majority
(52%) of cases z > 1. Second, although more than half of the candidates share the same ROUGE score, previous studies do not accurately reflect this quality measurement, as candidates are assigned different ranks during training even if they have equal scores (Appendix F).

Lastly, for the first time, we identify summaries with high lexical overlap but low semantic similarity as false positives (Appendix G). They can act as noise during the training phase, which prior works do not substantially consider.
To address these issues, we propose a novel training method in which a re-ranker balances lexical and semantic quality. Based on a two-stage framework, our model, named *BalSum*, is trained with multi-task learning. We directly reflect the ROUGE score difference in a ranking loss to preserve the lexical quality as much as possible. Then, we use a contrastive loss with instance weighting to identify summaries whose meanings are close to the document. Specifically, we define novel false positives (semantic mistakes) and present a strategy to reduce their influence in ranking. Experiments on the CNN/DM and XSum datasets demonstrate the effectiveness of our method. Notably, BalSum achieves an 89.67 BERTScore on CNN/DM, reaching a new state-of-the-art performance.
## 2 Method
Our method follows the two-stage framework.
Given a source document D, a function g generates a pool of candidate summaries C = {C1, C2, ..., Cm} at the first stage:

$$\mathbb{C}\gets g(D)\tag{1}$$

Then, a function f assigns a score to each candidate and selects the best summary C∗ with the highest score at the second stage:

$$C^{*}=\underset{C_{i}\in\mathbb{C}}{\operatorname{argmax}}\,f(C_{i},D)\tag{2}$$

Our goal is to train the ranking model f that identifies the correct summary from the outputs of the generation model g.
## 2.1 Model Architecture
We start with a bi-encoder using RoBERTa-base
(Liu et al., 2019) as a backbone neural network.
Inspired by Khattab and Zaharia (2020), we aim to capture rich semantic units at the sentence level.
As shown in Fig. 2, we insert *[CLS]* tokens in front of the K sentences in the document D to encode them into multi-vector representations. Then, we compute the individual score *Score*_k, which is modeled as an inner product:
$$S c o r e_{k}=s i m(E_{1}(C_{i}),E_{k}(D))\qquad(3)$$
where E1(Ci) and Ek(D) (k = 1, 2, ..., K) denote the representations of the *[CLS]* tokens for candidate summary Ci and document D, respectively. We calculate the similarity score f(Ci, D):
$$f(C_{i},D)=\sum_{k=1}^{K}\frac{Score_{k}}{\sum_{j=1}^{K}Score_{j}}Score_{k}=\sum_{k=1}^{K}w_{k}\cdot Score_{k}\tag{4}$$
In Appendix E, we show that our model can capture more information from documents at the sentence level.
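To make Eqs. 3–4 concrete, here is a minimal PyTorch sketch of the weighted scoring. The tensor shapes, the function name, and the assumption that the [CLS] representations have already been extracted from the encoder are ours, not from the paper.

```python
import torch

def similarity_score(cand_cls, doc_cls):
    """Weighted similarity between a candidate summary and a document (Eqs. 3-4).
    cand_cls: (d,)   representation of the candidate's [CLS] token, E_1(C_i).
    doc_cls:  (K, d) representations of the K sentence-level [CLS] tokens, E_k(D).
    """
    scores = doc_cls @ cand_cls        # Score_k = <E_1(C_i), E_k(D)>, shape (K,)
    weights = scores / scores.sum()    # w_k = Score_k / sum_j Score_j
    return (weights * scores).sum()    # f(C_i, D) = sum_k w_k * Score_k
```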
## 2.2 Training Objective
Ranking Loss The core idea is that the higher the quality of the candidate summary, the closer to
the document. We introduce a ranking loss on $f(\cdot)$:

$$\mathcal{L}_{rank}=\sum_{i}\sum_{j>i}\max\big(0,\,f(C_{j},D)-f(C_{i},D)+\big(-cost(C_{i},S)+cost(C_{j},S)\big)\ast\lambda\big)\tag{5}$$

where $S$ is the reference summary and $\lambda$ is the hyper-parameter.¹ Here, $cost(C_{i},S)=1-M(C_{i},S)$ is the margin, and $M$ is the automatic evaluation metric. We define it as ROUGE. We use the same metric as previous work (Liu and Liu, 2021; Liu et al., 2022), but the difference is that our loss directly reflects the quality measure during training. In other words, the quality was not properly reflected before, because a different margin $((j-i)\ast\lambda)$ was assigned even if the candidate summaries had the same ROUGE score.

¹We set λ to 1.0 on CNN/DM and 0.1 on XSum.
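A minimal sketch of Eq. 5 in PyTorch follows; it is not the authors' released code. The function name and the assumption that candidates arrive sorted by decreasing ROUGE quality are ours.

```python
import torch

def ranking_loss(scores, costs, lam=1.0):
    """Pairwise margin loss of Eq. 5.
    scores: (m,) model scores f(C_i, D), candidates sorted by quality (i < j => better).
    costs:  (m,) cost(C_i, S) = 1 - ROUGE(C_i, S) for the same ordering.
    """
    loss = scores.new_zeros(())
    m = scores.size(0)
    for i in range(m):
        for j in range(i + 1, m):
            margin = (costs[j] - costs[i]) * lam   # -cost(C_i,S) + cost(C_j,S)
            loss = loss + torch.clamp(scores[j] - scores[i] + margin, min=0)
    return loss
```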
## Contrastive Loss With Instance Weighting

The construction of positive and negative pairs is the critical point in contrastive learning. Therefore, we consider generated summaries from the same document as *positive samples* and irrelevant summaries from other documents as *negative samples*. Thus, we design the set of candidate summaries C in Eq. 1 as *positive* and a set of randomly sampled summaries N as *negative*.² To identify summaries whose meanings are close to the document, we introduce a contrastive learning objective with instance weighting:
$$\mathcal{L}_{ctr}=\frac{1}{|\mathbb{C}|}\sum_{C_{i}\in\mathbb{C}}-\log\frac{\alpha_{C_{i}}\times e^{f(C_{i},D)}}{e^{f(C_{i},D)}+\sum_{s_{i}\in N}e^{f(s_{i},D)}}\tag{6}$$
We newly define summaries that have high lexical matching but low semantic similarity as false positives. Inspired by Zhou et al. (2022), we design an instance weighting method to reduce their influence. We produce the weights for positives using SimCSE (Gao et al., 2021),
which is the state-of-the-art model for the sentence representation task:
$$\alpha_{C_{i}}={\begin{cases}0,&s i m(C_{i},S)<\phi\\ 1,&s i m(C_{i},S)\geq\phi\end{cases}}\quad\quad(7)$$
where ϕ is a hyper-parameter of the instance weighting threshold, and sim(·) is the cosine similarity score evaluated by the SimCSE model.
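A minimal sketch of the instance-weighted contrastive term of Eqs. 6–7 is shown below. The function name is ours, and applying the 0/1 weight outside the log (effectively masking zero-weighted false positives while keeping the 1/|C| normalization) is an implementation choice we make to keep the loss finite; the paper writes the weight inside the log.

```python
import torch

def weighted_contrastive_loss(pos_scores, neg_scores, weights):
    """Instance-weighted contrastive loss in the spirit of Eqs. 6-7 (a sketch).
    pos_scores: (m,) f(C_i, D) for candidates of the same document (positives).
    neg_scores: (n,) f(s_i, D) for summaries sampled from other documents (negatives).
    weights:    (m,) float alpha_{C_i} in {0, 1}, e.g. 1[SimCSE(C_i, S) >= phi].
    """
    denom = pos_scores.exp() + neg_scores.exp().sum()      # per-positive denominator
    per_pos = -torch.log(pos_scores.exp() / denom)         # -log softmax term per positive
    return (weights * per_pos).sum() / weights.numel()     # average over |C| positives
```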
Finally, as shown in Fig. 3, we combine the ranking (Eq. 5) and contrastive (Eq. 6) losses:
$${\mathcal{L}}=\gamma_{1}{\mathcal{L}}_{r a n k}+\gamma_{2}{\mathcal{L}}_{c t r}\qquad\qquad(8)$$
where γ is the scale factor of each loss and we find the optimal values (γ1 = 10, γ2 = 0.1) in Appendix H.
## 3 Experiments

## 3.1 Datasets
We experiment on two datasets, whose statistics are shown in Appendix C.
CNN/DailyMail (Hermann et al., 2015) is the most commonly used summarization dataset which contains articles from the CNN and DailyMail newspapers.
XSum (Narayan et al., 2018) is a one-sentence summary dataset from the British Broadcasting Corporation (BBC) for the years 2010 - 2017.
## 3.2 Training Details
We use diverse beam search (Vijayakumar et al.,
2016) to generate 16 candidate summaries. We start from pre-trained checkpoints of RoBERTa-base (Liu et al., 2019). We train BalSum for five epochs. It takes 33 hours on CNN/DM and 22 hours on XSum on a single RTX 3090 GPU. More details are described in Appendix D.
## 3.3 Main Results
In terms of the two-stage framework, we compare our results with SimCLS (Liu and Liu, 2021), SummaReranker (Ravaut et al., 2022), and BRIO (Liu et al., 2022). We apply BalSum on top of each base model which is BART or PEGASUS.
The results on CNN/DM are described in Table 1. BalSum outperforms a base BART model, with gains of 2.54/1.27/2.63 R-1/2/L. Notably, while its ROUGE performance is comparable to previous models, it achieves an 89.67 BERTScore, reaching a new state-of-the-art performance.
| Model | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|
| BART* | 44.16 | 21.28 | 40.90 | - |
| BART‡ | 44.04 | 21.06 | 40.86 | 88.12 |
| Pegasus* | 44.16 | 21.56 | 41.30 | - |
| BRIO-Mul* | 47.78 | 23.55 | 44.57 | - |
| BRIO-Mul‡ | 47.50 | 23.48 | 44.01 | 89.08 |
| BRIO-Ctr* | 47.28 | 22.93 | 44.15 | - |
| BRIO-Ctr‡ | 47.08 | 23.03 | 44.06 | 89.03 |
| SummaReranker* | 47.16 | 22.55 | 43.87 | 87.74 |
| SimCLS* | 46.67 | 22.15 | 43.54 | - |
| SimCLS‡ | 46.34 | 22.07 | 43.30 | 88.92 |
| BalSum | 46.58† | 22.33† | 43.49† | 89.67† |

Table 1: **Results on CNN/DM**. R-1/2/L are the ROUGE-1/2/L F1 scores. BS denotes BERTScore. *: results reported in the original papers. ‡: results from our own evaluation script. †: significantly better than the baseline model (BART).

| ϕ | N/A | 0.7 | 0.75 | 0.8 | 0.85 | 0.9 |
|---|---|---|---|---|---|---|
| BS | 89.37 | 89.35 | 89.36 | 89.63 | 89.37 | 89.67 |

Table 3: BERTScore (noted BS) results with different weighting thresholds ϕ on CNN/DM. "N/A": no instance weighting.
| Model | R-1 | R-2 | R-L | BS |
|---|---|---|---|---|
| BART* | 45.14 | 22.27 | 37.25 | - |
| Pegasus* | 47.21 | 24.56 | 39.25 | - |
| Pegasus‡ | 46.82 | 24.44 | 39.07 | 91.93 |
| BRIO-Mul* | 49.07 | 25.59 | 40.40 | - |
| BRIO-Mul‡ | **48.74** | **25.38** | **40.16** | **92.60** |
| BRIO-Ctr* | 48.13 | 25.13 | 39.84 | - |
| BRIO-Ctr‡ | 48.12 | 25.24 | 39.96 | 91.72 |
| SummaReranker* | 48.12 | 24.95 | 40.00 | 92.14 |
| SimCLS* | 47.61 | 24.57 | 39.44 | - |
| SimCLS‡ | 47.37 | 24.49 | 39.31 | 91.48 |
| BalSum | 47.17† | 24.23 | 39.09 | 91.48 |
Table 2: **Results on XSum**. R-1/2/L are the ROUGE1/2/L F1 scores. BS denotes BERTScore. *: results reported in the original papers. ‡: results from our own evaluation script. †: significantly better than the baseline model (PEGASUS).
When ranking the candidate summaries, our model can estimate the meaning of summaries without seriously degrading the lexical aspect. We argue that this is because BalSum decreases more false positives than other ranking models. We provide fine-grained analyses for this result and present a case study in Sec. 3.4.
In addition, we apply our method on XSum, as shown in Table 2. Though we use a different strategy to generate the validation and test data³, our method improves a base PEGASUS by a small margin. We believe one of the reasons is that XSum is restricted in capturing diverse semantic units because it consists of much shorter (one-sentence) summaries than CNN/DM.
| Model | BS@1 | BS@3 | BS@5 | R@1 | R@3 | R@5 |
|---|---|---|---|---|---|---|
| Oracle (R) | 90.77 | 90.42 | 90.18 | 44.85 | 42.68 | 41.16 |
| Oracle (BS) | 91.06 | 90.66 | 90.38 | 43.32 | 41.46 | 40.18 |
| SimCLS | 88.92 | 88.87 | 88.82 | 37.24 | 36.95 | 36.65 |
| BRIO-Ctr | 89.03 | 88.93 | 88.85 | 38.06 | 37.55 | 37.14 |
| BalSum | 89.67 | 89.60 | 89.54 | 37.46 | 37.08 | 36.78 |

Table 4: Analysis of re-ranking performance on CNN/DM. BS and R denote BERTScore and the mean ROUGE F1 score, respectively. Oracle (R) is ordered by ROUGE scores, while Oracle (BS) is ordered by BERTScore.
## 3.4 Analysis
Weighting Threshold ϕ Intuitively, the larger the weighting threshold, the fewer the false positives. We train our model with different instance weighting thresholds from 0.7 to 0.9. In Table 3, the highest threshold (ϕ = 0.9) shows the best performance, improving BERTScore by 0.3 compared to no instance weighting. We also find that increasing the threshold leads to performance improvement. Therefore, we demonstrate that false positives can be considered noise in training.
Ranking Evaluation Regardless of the number of candidates, an ideal ranking model should yield oracle results considering diverse aspects of summarization. We conduct an experiment to measure quality by selecting the top-k summaries after aligning the candidates with different models. As shown in Table 4, our model shows consistent performance in both evaluation metrics across k (about ±0.06 BERTScore, ±0.34 average ROUGE score). Compared to SimCLS and BRIO-Ctr, the second block in Table 4 demonstrates that BalSum captures semantic similarity best while maintaining an intermediate level of lexical overlap quality. Moreover, we find that BalSum has the lowest drop ratio of BERTScore (−1.52%) from the perfect ranking "oracle" scores.
We also investigate whether the summaries ranked by each model satisfy both lexical and semantic quality. We evaluate models using an F1 score that measures the cases where the higher-ranked summary has both a larger ROUGE and a larger BERTScore than the lower-ranked summary. In addition, we calculate the percentage of false positives. In Table 5, while BalSum performs worse (+0.48% FP, −0.63 F1) than BRIO-Ctr on XSum, it has better ranking performance (−0.23% FP, +0.34 F1) on CNN/DM. We observe that a decrease in false positives leads to an improvement in F1 score, demonstrating that the result in Table 1 can be interpreted as reducing semantic mistakes in ranking. As a result, we find that (1) our model is able to learn how to score each summary by balancing lexical and semantic quality, and (2) the other reason for the weak performance on XSum is the small decline in false positives compared to CNN/DM.

| Model | CNN/DM F1 | CNN/DM FP(%) | XSum F1 | XSum FP(%) |
|---|---|---|---|---|
| BRIO-Ctr | 78.50 | 10.96 | 76.95 | 10.01 |
| BalSum | 78.84 | 10.73 | 76.32 | 10.49 |
Case Study on CNN/DM Table 10 presents an intriguing pattern we observed when comparing the results of BRIO-Ctr and BalSum, which demonstrate that our model helps to capture precise details from documents. While BRIO-Ctr contains some irrelevant information in the summaries (shown as highlighted text in blue), BalSum selects the summaries where the last sentence is more consistent with the reference (shown as highlighted text in yellow). Furthermore, despite the comparable ROUGE scores of both models, we note that BalSum's selected summaries consistently have higher BERTScore than those of BRIO-Ctr.
## 4 Conclusion
In this work, we propose BalSum which aims to evaluate summaries by considering the balance between lexical and semantic quality. To achieve this, we perform a multi-task learning, which aligns summaries according to their lexical overlap qualities and identifies whether they are similar to the document. In addition, to our best knowledge, our method is the first attempt to present a new perspective of false positives (semantic mistakes) in ranking and creating the model to reduce their influence. Our experimental results and fine-grained analyses validate that our model achieves consistent improvements over competitive baselines.
## Limitations
Candidate Summaries Dependency While we mainly investigate a training objective to select the best summary among a set of candidates, our model remains dependent on the candidates obtained from the generation model. Recently, several works have been presented to improve language generation. For example, Narayan et al. (2022) and Xu et al. (2022) improve decoding methods to generate diverse outputs. It would be beneficial to apply our method to these approaches.

One-sentence Summary Our approach can fail to capture the information in an extremely short summary. Since Table 2 shows that our approach yields a smaller improvement than on CNN/DM, we plan to investigate how our model can capture more detailed features from an input text.
## Acknowledgements
We thank Soohyeong Kim and anonymous reviewers for valuable feedback and helpful suggestions.
This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(*MSIT) (No.2018R1A5A7059549
, No.2020R1A2C1014037) and supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(*MSIT) (No.2020-0-01373).
*Ministry of Science and ICT
## References
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In *Advances in Neural Information Processing Systems*,
volume 28. Curran Associates, Inc.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Advances in Neural Information*
Processing Systems, volume 28. Curran Associates, Inc.
Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. SIGIR '20, page 39–48, New York, NY, USA. Association for Computing Machinery.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Yixin Liu, Zi-Yi Dou, and Pengfei Liu. 2021. RefSum:
Refactoring neural summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1437–1448, Online. Association for Computational Linguistics.
Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072, Online. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany.
Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018*
Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Shashi Narayan, Gonçalo Simões, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins, and Mirella Lapata. 2022. A well-composed text is half done!
composition sampling for diverse conditional generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1319–1339, Dublin, Ireland. Association for Computational Linguistics.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In *4th International Conference on Learning Representations,*
ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022.
SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland.
Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In *Proceedings of the 35th International Conference* on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596–4604.
PMLR.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J.
Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. *CoRR*, abs/1610.02424.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Jiacheng Xu, Siddhartha Jonnalagadda, and Greg Durrett. 2022. Massive-scale decoding for text generation using lattices. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4659–4676, Seattle, United States. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2019. Bertscore:
Evaluating text generation with BERT. *CoRR*,
abs/1904.09675.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208, Online.
Association for Computational Linguistics.
Kun Zhou, Beichen Zhang, Xin Zhao, and Ji-Rong Wen.
2022. Debiased contrastive learning of unsupervised sentence representations. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6120–
6130, Dublin, Ireland. Association for Computational Linguistics.
## A Distribution Of z On XSum
The result in Fig. 4 shows that there is a majority (53%) of cases where z > 1.

Figure 4: Distribution of z (%) for a base PEGASUS model on XSum. Because a PEGASUS model generates a pool of 16 diverse beam search candidates, the X-axis ranges from 1 to 16. The Y-axis represents the proportion of z in the test set.
## B Evaluation Metrics
We examine our model with two evaluation metrics.
- **ROUGE** (Lin, 2004) is a widely used metric for summarization evaluation. We use the standard ROUGE Perl package⁴ for evaluation.
- **BERTScore** (Zhang et al., 2019) is a semantic similarity metric for multiple tasks. We use the public *bert-score* package⁵ shared by the authors; a usage sketch is shown below.
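The following is a minimal sketch of scoring candidates with the *bert-score* package. The example strings are placeholders, and relying on the package's default English model via `lang="en"` is an assumption; the appendix above only states that the public package is used.

```python
from bert_score import score

cands = ["the cat sat on the mat ."]          # candidate summaries (placeholder)
refs = ["a cat was sitting on the mat ."]     # reference summaries (placeholder)

# Default model selection via lang="en" is an assumption of this sketch.
P, R, F1 = score(cands, refs, lang="en")
print(F1.mean().item())
```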
## C Datasets Statistics
| Dataset | Train | Valid | Test |
|-----------|---------|---------|--------|
| CNN/DM | 287,227 | 13,368 | 11,490 |
| XSum | 204,045 | 11,332 | 11,334 |
Table 6: Statistics of two datasets
## D Implementation Details
Model We implement our model based on the Hugging Face Transformers library (Wolf et al., 2020). We use the pre-trained RoBERTa 'roberta-base' version, containing around 125M parameters. Our experiments are conducted on a single NVIDIA RTX 3090 GPU with 24GB memory.
Decoding Settings We use the diverse beam search algorithm (Vijayakumar et al., 2016) to decode summaries. We generate candidate summaries from 16 diversity groups with 16 beams. On CNN/DM and XSum, we use the pre-trained BART⁶ and PEGASUS⁷ models as the generation model.
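A minimal sketch of generating the 16-candidate pool with diverse beam search in Transformers is given below. The specific checkpoint name, diversity penalty, and maximum length are our assumptions; the settings above only fix 16 groups with 16 beams.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")  # assumed checkpoint
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

document = "(article text goes here)"  # placeholder input
inputs = tokenizer(document, return_tensors="pt", truncation=True)
candidates = model.generate(
    **inputs,
    num_beams=16,
    num_beam_groups=16,        # 16 diversity groups, as stated above
    diversity_penalty=1.0,     # penalty value is an assumption, not reported above
    num_return_sequences=16,   # one candidate per group
    max_length=142,            # assumption: a typical CNN/DM setting
)
summaries = tokenizer.batch_decode(candidates, skip_special_tokens=True)
```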
Training Settings We train our models for 5 epochs using an Adafactor optimizer (Shazeer and Stern, 2018). The batch size is 4 and the learning rate is 2e-3. During training, we randomly select 4 negative samples for each input document. We evaluate the model every 1000 steps on the validation set.
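As a sketch of the optimizer setup matching the reported settings (Adafactor, learning rate 2e-3), see below. Disabling Adafactor's relative-step schedule so that the fixed learning rate actually takes effect is our assumption; the backbone is the 'roberta-base' model stated above.

```python
from transformers import AutoModel
from transformers.optimization import Adafactor

model = AutoModel.from_pretrained("roberta-base")   # backbone of the ranker
optimizer = Adafactor(
    model.parameters(),
    lr=2e-3,                 # learning rate reported above
    scale_parameter=False,   # assumption: disable the internal schedule
    relative_step=False,     # so the fixed lr of 2e-3 is used
    warmup_init=False,
)
```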
## E Effect Of Model Architecture
We train BalSum with different model architectures and evaluate them on CNN/DM test set. For a fair comparison, we use only ranking loss in Eq. 5.
Table 7 shows that taking the weighted sum of scores in Eq. 4 leads to better performance than others.
| Model | R-1 | R-2 | R-L |
|---------|-------|-------|-------|
| [CLS] | 45.40 | 21.18 | 42.36 |
| Avg. | 46.59 | 22.40 | 43.47 |
| Ours | 46.64 | 22.38 | 43.52 |
## F Identical Candidates Scores
As shown in Table 8, we note that pools with at least two identical R-avg scores are a majority on CNN/DM and XSum. Since we count after removing identical summaries from the pool, we ensure that these are summaries with different content but the same R-avg score.
| Dataset | Decoding methods | # Summary candidates | # of pools with at least two same R-avg (%) |
|-----------|---------------------|------------------------|-----------------------------------------------|
| CNN/DM | Diverse beam search | 16 | 46.09 |
| XSum | Diverse beam search | 16 | 73.01 |
## G Examples For False Positive
Table 9 shows that #2 has a 2.33 lower R-avg than #1, but a 3.67 higher BERTScore. Also, when evaluated qualitatively, it can be seen that #2 is closer to the gold summary. While the sentence in green is discarded, the sentence in red is included in the reference summary.
## H Negative Size And Scale Factors
We have tuned the scale factor γ1 of the ranking loss and γ2 of the contrastive loss in Eq. 8 with different sizes of negative samples. As shown in Fig. 5, suitable scale factors (γ1 = 10, γ2 = 0.1) improve performance more than the other settings. Though *size* = 4 and *size* = 12 showed similar performance, we set the negative size to 4 due to memory efficiency.
## I Number Of Candidate Summaries
We set the size of the candidate summary pool to 16, as it is close to the maximum that fits on a standard GPU with 24GB of memory. Fig. 6 reports that our method is robust to the number of candidates.
| System | R-avg | BS | Summary |
|---|---|---|---|
| Reference | − | − | Didier Drogba played first Chelsea game after joining on free from Galatasaray. Ivory Coast striker was second half substitute for Diego Costa in 3-0 defeat by Werder Bremen. John Terry handed him captaincy later in game, but 36-year-old failed to trouble German side in front of goal. |
| Diverse beam #1 | 30.72 | 87.50 | Ivory Coast striker made his second return to the club. Drogba was a half-time substitute in the 3-0 defeat at the Weserstadion. The 36-year-old was replaced by Diego Costa at half-time. Dobar was the first player on the pitch after John Terry left. |
| Diverse beam #2 | 28.39 | 91.17 | Didier Drogba made his second Chelsea debut in pre-season friendly at Werder Bremen. The 36-year-old was a half-time substitute as Chelsea lost 3-0. Drogbba was captain after John Terry left the pitch in the second half. The Ivorian striker missed a penalty and failed to make an impact on the game. |

Table 9: False positive examples from fine-tuned BART model on CNN/DM. **R-avg** is the average of ROUGE-1/2/L scores. BS denotes BERTScore. The related sentences in the reference are in **bold**.
| System | R-1 | R-2 | R-L | BS | Summary |
|----------------------------------------------------------------------------------------------------------|-------|-------|-------|-------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Reference | - | - | - | - | arsene wenger will have chat with theo walcott ahead of arsenal clash. walcott was substituted after 55 minutes of england's draw with italy. arsenal boss is wenger is concerned by the winger's confidence. the gunners take on liverpool at the emirates stadium on saturday. |
| BRIO-Ctr | 60.61 | 41.24 | 46.46 | 89.93 | theo walcott played just 55 minutes of england's 1-1 draw with italy. arsene wenger says he is concerned by the winger's confidence. the arsenal manager will speak with walcott ahead of liverpool clash. walcott could start against liverpool on saturday with alex oxlade-chamberlain out and danny welbeck a doubt. |
| BalSum | 61.54 | 38.20 | 41.76 | 92.36 | arsenal winger theo walcott struggled for england against italy. arsene wenger says he is concerned by the winger's confidence. walcott was replaced after 55 minutes of england's 1-1 draw in turin. the gunners face liverpool on saturday in a top-four clash. |
| Reference | - | - | - | - | experts have voiced concerns over diy brain stimulation kits for children. for a few hundred dollars, one can be purchased online from various sites. it promises to help children with math homework and claims to help adhd. professor colleen loo from the black dog institute strongly believes that the equipment poses a danger to amateurs and children. the equipment is currently being used to treat people with speech impediments but is still very much in trial stages. |
| BRIO-Ctr | 40.0 | 16.26 | 19.20 | 87.11 | for a few hundred dollars, you can purchase a brain stimulation kit online. experts have voiced concerns over the potential side effects. the kits are being sold online for as little as $ 55 us. one site even advertises how to make your own electrodes using a household sponge. |
| BalSum | 36.92 | 17.19 | 27.69 | 89.90 | parents are buying diy brain stimulation kits for their children. the kits are being sold online for as little as $ 55 us. experts are concerned about the potential side effects of the equipment. the devices are used to improve speaking in those with speech problems. the equipment is still relatively new and experimental. |
| Reference | - | - | - | - | ross barkley has been repeatedly linked with a move to manchester city. former city star gareth barry says his everton team-mate is too young. the toffees face manchester united in the premier league on sunday. |
| BRIO-Ctr | 47.19 | 27.59 | 29.21 | 88.85 | everton team-mate gareth barry has advised ross barkley against moving to manchester city. the 21-year-old has been linked with a move away from goodison park. barry believes it is too early for the youngster to decide on his future. the veteran midfielder spent four seasons at the etihad before joining everton. |
| BalSum | 46.34 | 25.0 | 34.15 | 91.16 | gareth barry has advised ross barkley against moving to manchester city. the everton midfielder believes it is too early for the 21-year-old to decide on his future. barry spent four seasons at the etihad before arriving on merseyside. the toffees face manchester united on sunday. |
| Reference | - | - | - | - | local councils are urged to draw up maps of the residents who are at risk. essex and gloucestershire have already made 'loneliness maps' experts warn that being lonely can lead to serious health problems. |
| BRIO-Ctr | 50.57 | 28.24 | 29.89 | 90.30 | two county councils have already implemented 'loneliness maps' to target 'danger zones' being lonely can lead to health problems including dementia and high blood pressure. campaigners say councils should draw up maps of the places where pensioners are most at risk. study by university of kent and campaign to end loneliness recommends maps. |
| BalSum | 50.0 | 27.91 | 43.18 | 91.28 | campaigners say councils should draw up maps of places where pensioners and others are most likely to suffer from social isolation. two county councils, essex and gloucestershire, have already implemented the maps. they allow them to target 'danger zones' of loneliness. being lonely can lead to health problems including dementia and high blood pressure. |
| Reference | - | - | - | - | the gruesome vision was captured in australia and uploaded last week. the lizard swings its neck back and forth in a bid to swallow the rabbit. goannas can unhinge their lower jaws allowing them to swallow large prey. |
| BRIO-Ctr | 51.16 | 23.81 | 27.91 | 88.75 | two-metre long reptile is filmed balancing on top of a power pole to swallow rabbit. the lizard swings its neck back and forth as it battles to swallow its catch. it finishes the feat in under a minute, and the video was uploaded to youtube last week. |
| BalSum | 46.91 | 20.25 | 34.57 | 90.72 | two-metre long lizard filmed battling to swallow rabbit in under one minute. video shows lizard balance at the top of a power pole while swallowing its prey. goannas can unhinge their lower jaws when feeding, allowing them to eat over-sized prey. |
Table 10: **Case Study** on CNN/DM. R-1/2/L are the ROUGE-1/2/L F1 scores. BS denotes BERTScore. The related
sentences in the reference are in **bold**.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1. Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
3.1 Datasets, 3.2 Training Details, Appendix B. Evaluation Metrics, Appendix D. Implementation Details
✓ B1. Did you cite the creators of artifacts you used?
3.1 Datasets, 3.2 Training Details, Appendix B. Evaluation Metrics, Appendix D. Implementation Details
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3.1 Datasets, 3.2 Training Details, Appendix B. Evaluation Metrics, Appendix D. Implementation Details
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.1 Datasets, 3.2 Training Details, Appendix D. Implementation Details
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.1 Datasets
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix C. Datasets Statistics
## C ✓ **Did You Run Computational Experiments?**
3.2 Training Details, Appendix D. Implementation Details
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix H. Negative Size and Scale Factors
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.2 Training Details, Appendix D. Implementation Details, Appendix H. Negative Size and Scale Factors
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.3 Main Results, 3.4 Analysis
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix B. Evaluation Metrics, Appendix D. Implementation Details
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
agravante-etal-2023-learning | Learning Neuro-Symbolic World Models with Conversational Proprioception | https://aclanthology.org/2023.acl-short.57 | The recent emergence of Neuro-Symbolic Agent (NeSA) approaches to natural language-based interactions calls for the investigation of model-based approaches. In contrast to model-free approaches, which existing NeSAs take, learning an explicit world model has an interesting potential especially in the explainability, which is one of the key selling points of NeSA. To learn useful world models, we leverage one of the recent neuro-symbolic architectures, Logical Neural Networks (LNN). Here, we describe a method that can learn neuro-symbolic world models on the TextWorld-Commonsense set of games. We then show how this can be improved further by taking inspiration from the concept of proprioception, but for conversation. This is done by enhancing the internal logic state with a memory of previous actions while also guiding future actions by augmenting the learned model with constraints based on this memory. This greatly improves the game-solving agents performance in a TextWorld setting, where the advantage over the baseline is an 85{\%} average steps reduction and x2.3 average score. |
## Learning Neuro-Symbolic World Models With Conversational Proprioception
Don Joven Agravante and **Daiki Kimura** and **Michiaki Tatsubori**
and **Asim Munawar** and **Alexander Gray**
IBM Research [email protected], [email protected], [email protected], [email protected], [email protected]
## Abstract
The recent emergence of Neuro-Symbolic Agent (NeSA) approaches to natural language-based interactions calls for the investigation of model-based approaches. In contrast to model-free approaches, which existing NeSAs take, learning an explicit world model has an interesting potential especially in the explainability, which is one of the key selling points of NeSA.
To learn useful world models, we leverage one of the recent neuro-symbolic architectures, Logical Neural Networks (LNN). Here, we describe a method that can learn neuro-symbolic world models on the TextWorld-Commonsense set of games. We then show how this can be improved further by taking inspiration from the concept of proprioception, but for conversation. This is done by enhancing the internal logic state with a memory of previous actions while also guiding future actions by augmenting the learned model with constraints based on this memory. This greatly improves the game-solving agent's performance in a TextWorld setting, where the advantage over the baseline is an 85% average steps reduction and ×2.3 average score.
## 1 Introduction
The recent emergence of neuro-symbolic (NS) approaches includes natural language-based sequential decision making (Kimura et al., 2021b; Chaudhury et al., 2021; Kimura et al., 2021a). These works propose a model-free approach of learning a logical policy and test it with interactive-text games (Narasimhan et al., 2015; Côté et al., 2018; Hausknecht et al.,
2020; Murugesan et al., 2021), which have become an interesting benchmark in the intersection of natural language processing and sequential decision making. NS approaches give the direct explainability of what is learned and allow natural integration of external knowledge as logic. Despite that, existing NS approaches are of model-free reinforcement learning (RL) but it would be useful if we could have model-based approaches that are potentially more sample efficient and can reach higher cumulative rewards as shown by neural world models (Hafner et al., 2019; Łukasz Kaiser et al., 2020).
In contrast to these, a logical world model learned using NS approaches would allow an agent to use logical reasoning which enables us to obtain a trace of logical steps for better explainability. In fact, several sets of benchmarks and game environments have been proposed such as TextWorld (Côté et al., 2018), Jericho (Hausknecht et al., 2020)
and TextWorld Commonsense (TWC) (Murugesan et al., 2021), which are far too complicated to solve without reasoning and common sense, compared to the original game setting (Narasimhan et al., 2015).
Also, existing implementations of NS agents do not start from natural language but instead use the logical facts provided from the game engines.
In this paper, we focus on the problem of learning logical world models in NS methods. The main research question to be addressed is then how we can learn such models for text-based games using a general semantic parser. As a state-of-the-art interactive-text agent, GATA (Adhikari et al., 2020)
constructs belief graphs used to enhance deep RL
methods. In contrast to understanding the world state in a latent space, we want to explicitly use the logical world models to plan optimal action sequences and to provide direct explainability of the decision making policy. For the explainability purpose, we leverage general semantic parsing, following one of the early work constructing knowledge graphs (Ammanabrolu and Riedl, 2019).
An overview of our proposed method is depicted by Figure 1. The left side depicts that the environment state can be sufficiently approximated as a set of logical facts. Continuing in the top right, the agent can get textual observations of the environment. We assume that we have a *semantic* parser (Drozdov et al., 2022) that converts these observations into a logical form. In practice, the semantic parsing is good but not perfect; hence, our agent must be capable of handling noisy logical states. From such states, our agent should produce suitable actions for accomplishing its tasks in the environment.
The main contributions of this paper are: the proposal of a novel world model-learning method with a neuro-symbolic approach, and its experimental results with TWC.
## 2 Problem Definition
Text-based games are often modelled with the RL
problem setting in mind as Partially Observable Markov Decision Processes (PO-MDP) (Côté et al., 2018; Hausknecht et al., 2020). As a first approach, we add an assumption - that the semantic parser can remove partial observability and that we are dealing with an MDP. At each time step the agent uses the information in a state, s, to take an action, a, which transitions the state to the new state, s′
according to the state transition function T such that s′ = T(*s, a*). While acting in this environment the agent also gets rewards, r, according to an unknown reward function, R, such that r = R(*s, a*).
The training loop consists of exploring the environment by taking actions while keeping track of the experience in the form of {*s, a, r, s*′}. An agent then uses this set of experiences to learn something that enables it to take better actions. In the model-free RL setting, the agent learns a policy or value function which can directly govern the actions. Here, we are interested in the model-based RL setting where the agent learns a model of the world which usually consists of both T and R. This model can then be used with planning methods to find the optimal actions.
Based on the classical model-based RL setting, our problem has two more important specifications.
First, we assume that our environment is *relational*,
similar to (Lang et al., 2012). This means that all actions and states are composed of relational logic. They may be in the propositional form but there must be a corresponding lifted form that has a consistent *meaning*. For example, the propositional state, *on(book,table)* can be abstracted or *lifted* into on(x,y) with predicate, on, and the variables, (*x, y*).
The first assumption is that all states and actions handled by the agent are in this relational lifted form. This assumption can be handled as a design specification of the semantic parser. The second assumption is that the goal state is given. This is a weaker assumption that is already used in current RL research, the so-called *goal-conditioned RL*.
Here, it allows us to concentrate only on learning T since R is no longer required for planning when we are given the goal state.
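As an illustration of this setup (not the paper's code), logical states can be represented as sets of relational facts, lifted as described above, and transition triples collected with a uniformly random exploration policy; the environment interface used below is an assumption.

```python
import random

def lift(fact):
    """Lift a propositional fact, e.g. ("on", "book", "table") -> ("on", "x0", "x1")."""
    pred, *args = fact
    return (pred, *[f"x{i}" for i in range(len(args))])

def collect_transitions(env, num_steps=1000):
    """Gather (s, a, s') triples with a uniformly random policy.

    `env.reset`, `env.admissible_actions`, and `env.step` are assumed interfaces;
    states are sets of relational facts.
    """
    triples = []
    state = env.reset()
    for _ in range(num_steps):
        action = random.choice(env.admissible_actions(state))
        next_state, _reward, done = env.step(action)
        triples.append((frozenset(state), action, frozenset(next_state)))
        state = env.reset() if done else next_state
    return triples
```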
## 3 Learning Logical World Models
The problem of learning logical rules that explain a given set of logical examples can be cast into the general problem called Inductive Logic Programming (ILP) (Muggleton and De Raedt, 1994).
What needs to be done is then to cast our relational model-based RL problem into ILP form. But before going into that detail, it is important to note that relying on classical ILP has significant failings.
In particular, it is not well suited to noisy data to the extent that a single erroneous data point may cause the whole system to fail.
However, newer methods that leverage neural networks have shown great promise on working even with noisy data (Evans and Grefenstette, 2018). These are sometimes called neural ILP, differentiable ILP or neuro-symbolic ILP. These advances are the main impetus for us to research on the learning of logical world models.
We may use any such ILP method that is noise-resistant but here we use the Logical Neural Network (LNN) (Riegel et al., 2020) as a Neuro-Symbolic AI framework. It is an end-to-end differentiable system that enables scalable gradient-based learning and it has a real-valued logic representation of each neuron that enables logical reasoning (Riegel et al., 2020).
## Action Ilp With Lnn
Now, getting back to the task of expressing our relational model-based RL problem as ILP, we first gather data samples which are triples of lifted logic,
(*s, a, s*′). This is gathered by using an exploration policy to generate actions. This data collection may be done in an offline or online RL setting but we assume that a large enough *batch* is available in the online RL setting before we start the learning procedure. Here, we used a policy that uniformly randomly samples the action space but better exploration methods may be used, such as that outlined in (Lang et al., 2012). The improvement of better exploration is usually seen in *data efficiency* leading to faster convergence but using a sufficiently large amount of data won't change the benchmark scores on the TWC environment. We believe that a more rigorous treatment on the exploration of exponential but structured (logical) spaces merits its own research topic.
Given a batch of data samples, the learning procedure must produce an estimate of T. This T will be the *hypothesis* to be generated by our ILP. This is a set of logical rules that best fits the data. To make learning more efficient we need to narrow down the definition of T. Because we are ultimately interested in using T for planning, we define it as a set of planning operators where each one is a quadruple of (*α, β, γ, σ*). Each element is a set of logical conditions. The conditions (*α, β*) are preconditions where α are conditions that must be true for the action to be executable, β are ones that must be false. The conditions (*γ, σ*) are post-conditions where γ are ones made true by the action and σ are ones made false. These conditions are the lifted logic statements that comprise a state, s, and the set of all possible conditions is P.
We model each of the operator elements as an LNN conjunction operator whose inputs are P. The LNN learning procedure can learn weights for each of these inputs that correspond to real-valued logic
(Riegel et al., 2020; Sen et al., 2021). For the LNNs of α and β, the inputs are given the corresponding logical values of the conditions in s. The output is true when the action, a, corresponds and s ≠ s′; otherwise it is false. For the LNNs of γ and σ, the inputs are given the logical values corresponding to the difference in the conditions of s and s′, such that γ are the conditions made true and σ those that are made false. The output is true when the action, a, corresponds; otherwise it is false.
Using these inputs and outputs to the LNN,
gradient-based optimization can be used for supervised learning (Riegel et al., 2020; Sen et al., 2021).
When learning converges, we have a set of weights for each of the corresponding elements. These may be interpreted as probabilistic transitions but here we simply threshold them and maintain a deterministic transition system for our final estimate of T.
Given this operator transition model and the goal, we can be in any state and use classical planning methods to find a series of actions to reach the goal.
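Setting the LNN machinery aside, the supervision targets described above can be illustrated with plain set operations: for every transition, γ collects the conditions made true (s′ minus s) and σ the conditions made false (s minus s′), grouped by lifted action. The sketch below is a simplification, not the differentiable learning itself.

```python
from collections import defaultdict

def effect_targets(triples):
    """Aggregate post-condition targets per action from (s, a, s') triples."""
    gamma = defaultdict(set)  # conditions an action makes true
    sigma = defaultdict(set)  # conditions an action makes false
    for s, a, s_next in triples:
        if s == s_next:
            continue  # no state change: skipped here; in the paper such cases label the precondition LNNs false
        gamma[a] |= (s_next - s)
        sigma[a] |= (s - s_next)
    return gamma, sigma
```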
## Conversational Proprioception As Memory-Based Constraints
To further enhance our logical model, we take inspiration from the concept of proprioception (Tuthill and Azim, 2018), which is the sensation of body position and movement critical to human experience, while it is typically absent from conscious perception. This concept is commonly used in imitation learning (Torabi et al., 2019) and in robotics (Cong et al., 2022). In these domains, the type of sensors clearly distinguish the internal state measurement
(proprioception) and external state measurement
(perception). Combining both information sources is crucial to improving an agent's world model. We take inspiration from this to improve our logical world model estimate for text-based games or other tasks with logical state representations.
In general, proprioception is a prediction of the next state, s′ = Tˆ(*s, a*), based on the existing knowledge of one's body dynamics in the form of the transition model estimate, Tˆ, the current state, s, and the action taken, a. This additional information is crucial to help us disambiguate and better locate the next state. For our task where T is a logical model, we propose to augment our learned T with a set of proprioception rules, ϵ(*s, a*), such that our T
will now be defined as (α, β, γ, σ, ϵ(*s, a*)). For our agent, we define ϵ very generally such that it only consists of 2 rules. First, it tracks state-action pairs that were already tried and augments the state with this information. This serves as a type of memory added onto the state. Second, it adds a precondition onto the transition models. This serves as a type of constraint on the actions. For our TWC agent, we defined preconditions that prevent state-action pairs from repeating. These 2 rules are general enough to apply to any TWC environment and possibly beyond to other conversational agents in general.
We leave the design of further proprioception rules as a possible future work.
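The two rules can be pictured with a small helper like the following; this is an illustrative sketch only, since the paper encodes the same idea as first-order-logic constraints rather than Python.

```python
class ConversationalProprioception:
    """Memory of tried (state, action) pairs plus a no-repeat precondition."""

    def __init__(self):
        self.tried = set()

    def remember(self, state, action):
        # Rule 1: track state-action pairs that were already tried.
        self.tried.add((frozenset(state), action))

    def augment_state(self, state):
        # Expose the memory in the logical state as tried(...) facts.
        facts = frozenset(state)
        extra = {("tried", a) for (s, a) in self.tried if s == facts}
        return facts | extra

    def allowed(self, state, action):
        # Rule 2: extra precondition preventing an already-tried pair from repeating.
        return (frozenset(state), action) not in self.tried
```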
## 4 Experiment And Discussion
For evaluating the quality of the world model learned, we first qualitatively analyze the learned action models. Then we measure the interactivetext agent performance against a quantitative TextWorld benchmark. In this paper, we experiment on the TextWorld Commonsense (TWC) set of games (Murugesan et al., 2021) with the same experimental settings.
Once we have a logical world model, we can use it with a planner. Here, we use the Fast-Downward system (Helmert, 2006). For convenience, we convert the learned logical transition model into the PDDL (Planning Domain Definition Language) format by combining (*α, β, ϵ*) into the preconditions and (*γ, σ*) into the effects. We also augment the state with ϵ(*s, a*).
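A minimal sketch of that conversion step is shown below; the literal rendering and the example grounding (loosely following the *insert_into* description of Figure 2) are simplifications for illustration, not the system's actual output.

```python
def to_pddl_action(name, params, alpha, beta, gamma, sigma, epsilon=()):
    """Render one learned operator as a PDDL action.

    alpha/beta/epsilon become the :precondition (beta negated; epsilon passed as
    already-rendered literals such as "(not (tried_insert_into ?v0 ?v1))"),
    and gamma/sigma become the :effect (sigma negated).
    """
    def lit(cond):
        return "(" + " ".join(cond) + ")"

    pre = [lit(c) for c in alpha] + [f"(not {lit(c)})" for c in beta] + list(epsilon)
    eff = [lit(c) for c in gamma] + [f"(not {lit(c)})" for c in sigma]
    return (
        f"(:action {name}\n"
        f"  :parameters ({' '.join(params)})\n"
        f"  :precondition (and {' '.join(pre)})\n"
        f"  :effect (and {' '.join(eff)}))"
    )

# Illustrative call:
print(to_pddl_action(
    "insert_into", ["?v0", "?v1"],
    alpha=[("carry-1", "?v0")], beta=[],
    gamma=[("has_location2", "?v0", "?v1")], sigma=[("carry-1", "?v0")],
    epsilon=["(not (tried_insert_into ?v0 ?v1))"],
))
```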
## Learned Models
We confirmed that the world models were meaningfully learned by the model-based approaches.
Figure 2 shows example learned action models in a converted PDDL form for an action *insert_into*
(insert XX into YY) by model-based approaches from AMR-based logical facts. For our results, we first show some examples of the learned rules in our logical world model in Figure 2. Here, we can visually inspect the validity of the rules. For example with the left case, the effect would be that the object v0 is at/in/on the container v1 (*has_location2*) but now it is no longer in the inventory (*carry-1*).
This level of explainability is inherent in logical models although it requires careful inspection.
The effect of proprioception can be seen in the right-hand side of Figure 2. The predicate of *tried_insert_into* is from an AMR-based predicate *insert_into* but with the intention modality of the agent, which is encoded in first-order logic.
This recognition of an already-performed action *insert_into* should contribute to avoiding repeatedly performing failed actions.
## Twc Performance
It would be more interesting if we take altogether to see if the learned rules allow us to plan optimal actions in the world. To answer this, we present our results in Table 1. By running our complete framework, we can quantitatively compare against the benchmarks in (Murugesan et al., 2021). We can also see the effects of the important components of our agent.
Table 1 compares seven different methods/configurations corresponding to each row. The first row is a deep-learning-only method which is the best from the original benchmark in (Murugesan et al., 2021). The second and third rows are model-free neuro-symbolic methods. The fourth row is a planning result without any learning by using an ideal world model (assumed given) and access to (noiseless) logical game states. This serves as an upper bound for comparison only since having access to the ideal model and states is difficult or impossible in other applications. The fifth row is a model-based RL method given the ideal game-engine facts (equivalent to a perfect, noiseless semantic parser). The sixth row is our model-based RL method with a practical AMR-based semantic parser but without proprioception.
Finally, the seventh row is our complete model-based RL method with a practical AMR-based semantic parser and proprioception module (memory-based constraints). In summary, the first four rows serve as comparison points and the last three rows show the results of our method. Note that we have additional assumptions differing from the deep-learning-only setting of the original setup, and we note these in the table as what type of semantic parsing and handicap are used. The TWC games are categorized into Easy-Medium-Hard with a validation and testing set for each, as shown in the columns.
Comparing the result of our full method (last row) against current methods (first 3 rows) shows a significant improvement across the board. This shows the strength of the model-based NeSA framework against purely deep learning methods or the previously published model-free NeSA.
To see the effect of each component we can compare the results of the last four rows. Comparing the model-based NeSA with ideal semantic parsing
(third to last row) against the planning upper bound, we can see that we can perfectly solve all except the test set of the hard games. After investigating,
| System | Semantic parsing | Handicap | Easy (Valid) | Easy (Test) | Medium (Valid) | Medium (Test) | Hard (Valid) | Hard (Test) |
|---|---|---|---|---|---|---|---|---|
| TWC agent (DL-only) [AAAI 2021] | Word embedding | Admissible action, Inventory, Curated Common Sense (these are common) | 17.65 ± 3.62, 85% ± 7% | 18.00 ± 3.24, 87% ± 5% | 37.18 ± 4.86, 72% ± 7% | 43.08 ± 4.13, 54% ± 17% | 49.36 ± 7.50, 46% ± 10% | 49.96 ± 0.00, 22% ± 0% |
| Model-free NeSA based on [EMNLP 2021] | Skipped | Game-engine facts | - | 15.00, 100% | - | 28.60, 100% | - | - |
| Model-free NeSA (REINFORCE) | AMR-based facts | - | - | 32.28 ± 3.24, 63% ± 5% | - | 43.68 ± 5.36, 38% ± 25% | - | 49.48 ± 1.04, 28% ± 13% |
| Planning (Model-based NeSA) | Skipped | Action transition, Game-engine facts | 2.4, 100% | 2.4, 100% | 4.4, 100% | 3.6, 100% | 13.6, 100% | 14.0, 100% |
| Model-based NeSA (Learned action transition) | Skipped | Game-engine facts | 2.4 ± 0.0, 100% | 2.4 ± 0.0, 100% | 4.4 ± 0.0, 100% | 3.6 ± 0.0, 100% | 13.6 ± 0.0, 100% | 28.4 ± 0.0, 60.6% |
| Model-based NeSA w/o proprioception | AMR-based facts | - | 21.4 ± 0.0, 57.1% | 21.2 ± 0.0, 42.9% | 31.6 ± 0.0, 38.5% | 31.6 ± 0.0, 50.0% | 42.8 ± 0.0, 20.6% | 42.8 ± 0.0, 24.2% |
| Model-based NeSA w/ proprioception | AMR-based facts | - | 3.6 ± 0.0, 100% | 4.0 ± 0.0, 100% | 7.6 ± 0.0, 100% | 5.6 ± 0.0, 100% | 33.2 ± 0.0, 64.7% | 42.8 ± 0.0, 24.2% |
we found an interesting limitation wherein novel predicates appear in the test set that do not appear in any of the training or validation set. This is a current limitation of our system. Since we do not do any online learning during the test phase, there is no way to take these novel predicates into account. The significant effects of AMR-originated noise or lack of information can be seen by comparing the second-to-last and third-to-last rows. Here we see a significant degradation across metrics and datasets. However, the performance is still comparable or often better than the deep-learning-only benchmark of the first row. Comparing the last row (full method with proprioception module) and second-to-last row shows that we can recover most of the performance. We can also see that the metrics are competitive to the model-based approach from game-engine provided logical facts (3rd-last row). This shows the effectiveness of adding the proprioception module comprising both the memory and memory-based constraints.
## 5 Conclusion
We proposed a model-based RL agent for textbased games which comprises of a semantic parser producing logical states, a neuro-symbolic ILP
module for learning logical world models, and an off-the-shelf planning system to produce optimal actions in the game world. We augment this with a proprioception-inspired module comprising both the memory and memory-based constraints. Our results and experiments show that each of the components is essential and our model-based NeSA
agent outperforms previous benchmarks on the TextWorld Commonsense set of games.
## 6 Limitations
The experimental environment we used for testing our agents gives artificially generated natural language text, whose distribution of vocabulary, syntax, and semantic frames is controlled and limited to what the natural language text generators can provide. While we tried to include out-of-vocabulary entities in our experiments, applying the proposed approach to natural language text in the wild, such as chatbots working with humans, will face issues such as out-of-vocabulary entities, relations, etc. We believe, however, that approaching from controlled "wildness" is an important direction of this work for interactive-text agents.
The experiments and embodiment of the method presented here also makes some assumptions on the underlying model (MDP) of the environment.
These are discussed in the problem definition and methods (section 2 and 3). Perhaps the most important is the assumption that the environment can be sufficiently approximated with logical states. We also used a deterministic planner so highly stochastic environments are currently out-of-scope.
## References
Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, and Will Hamilton. 2020. Learning dynamic belief graphs to generalize on text-based games. In *Advances in Neural Information Processing Systems*,
volume 33, pages 3045–3057. Curran Associates, Inc.
Prithviraj Ammanabrolu and Mark Riedl. 2019. Playing text-adventure games with graph-based deep reinforcement learning. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3557–3565, Minneapolis, Minnesota.
Association for Computational Linguistics.
Subhajit Chaudhury, Prithviraj Sen, Masaki Ono, Daiki Kimura, Michiaki Tatsubori, and Asim Munawar.
2021. Neuro-symbolic approaches for text-based policy learning. In Conference on Empirical Methods in Natural Language Processing, pages 3073–3078.
Lin Cong, Hongzhuo Liang, Philipp Ruppel, Yunlei Shi, Michael Görner, Norman Hendrich, and Jianwei Zhang. 2022. Reinforcement learning with visionproprioception model for robot planar pushing. *Frontiers Neurorobotics*, 16:829437.
Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Ruo Yu Tao, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler.
2018. Textworld: A learning environment for textbased games. *CoRR*, abs/1806.11532.
Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, and Ramón Fernandez Astudillo. 2022. Inducing and using alignments for transition-based AMR parsing.
CoRR, abs/2205.01464. To appear in NAACL-22.
Richard Evans and Edward Grefenstette. 2018. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1–64.
Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. 2019. Learning latent dynamics for planning from pixels. In International conference on machine learning, pages 2555–2565. PMLR.
Matthew Hausknecht, Prithviraj Ammanabrolu, MarcAlexandre Côté, and Xingdi Yuan. 2020. Interactive fiction games: A colossal adventure. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7903–7910.
Malte Helmert. 2006. The fast downward planning system. *Journal of Artificial Intelligence Research*,
26:191–246.
Daiki Kimura, Subhajit Chaudhury, Masaki Ono, Michiaki Tatsubori, Don Joven Agravante, Asim Munawar, Akifumi Wachi, Ryosuke Kohita, and Alexander Gray. 2021a. LOA: Logical optimal actions for textbased interaction games. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 227–231.
Daiki Kimura, Masaki Ono, Subhajit Chaudhury, Ryosuke Kohita, Akifumi Wachi, Don Joven Agravante, Michiaki Tatsubori, Asim Munawar, and Alexander Gray. 2021b. Neuro-symbolic reinforcement learning with first-order logic. In *Conference* on Empirical Methods in Natural Language Processing, pages 3505–3511.
Tobias Lang, Marc Toussaint, and Kristian Kersting.
2012. Exploration in relational domains for modelbased reinforcement learning. Journal of Machine Learning Research, 13(119):3725–3768.
Stephen Muggleton and Luc De Raedt. 1994. Inductive logic programming: Theory and methods. The Journal of Logic Programming, 19:629–679.
Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, and Murray Campbell. 2021. Text-based rl agents with commonsense knowledge: New challenges, environments and baselines. *Proceedings of the AAAI*
Conference on Artificial Intelligence, 35(10):9018–
9027.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 1–11, Lisbon, Portugal. Association for Computational Linguistics.
Ryan Riegel, Alexander G. Gray, Francois P. S. Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, Shajith Ikbal, Hima Karanam, Sumit Neelam, Ankita Likhyani, and Santosh K. Srivastava. 2020. Logical neural networks.
CoRR, abs/2006.13155.
Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, and Alexander G. Gray. 2021. Neuro-symbolic inductive logic programming with logical neural networks.
CoRR, abs/2112.03324.
Faraz Torabi, Garrett Warnell, and Peter Stone. 2019.
Imitation learning from video by leveraging proprioception. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3585–3591. ijcai.org.
John C. Tuthill and Eiman Azim. 2018. Primer: Proprioception. *Current Biology*, 28:R187–R207.
Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłoś, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. 2020. Model based reinforcement learning for atari.
In *International Conference on Learning Representations*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
yang-etal-2023-domain | In and Out-of-Domain Text Adversarial Robustness via Label Smoothing | https://aclanthology.org/2023.acl-short.58 | Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions). While several defense techniques have been proposed, and adapted, to the discrete nature of text adversarial attacks, the benefits of general-purpose regularization methods such as label smoothing for language models, have not been studied. In this paper, we study the adversarial robustness provided by label smoothing strategies in foundational models for diverse NLP tasks in both in-domain and out-of-domain settings. Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks. We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples. | # In And Out-Of-Domain Text Adversarial Robustness Via Label Smoothing
Yahan Yang∗
University of Pennsylvania [email protected]
Soham Dan∗
IBM Research [email protected]
Dan Roth
University of Pennsylvania [email protected]
Insup Lee
University of Pennsylvania [email protected]
## Abstract
Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions). While several defense techniques have been proposed, and adapted, to the discrete nature of text adversarial attacks, the benefits of general-purpose regularization methods such as label smoothing for language models, have not been studied.
In this paper, we study the adversarial robustness provided by label smoothing strategies in foundational models for diverse NLP tasks in both in-domain and out-of-domain settings.
Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks. We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
## 1 Introduction
Neural networks are vulnerable to adversarial attacks: small perturbations to the input, which do not fool humans (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017). In NLP
tasks, previous studies (Alzantot et al., 2018; Jin et al., 2019; Li et al., 2020; Garg and Ramakrishnan, 2020) demonstrate that simple word-level text attacks (synonym substitution, word insertion/deletion) easily fool state-of-the-art models, including pre-trained transformers like BERT (Devlin et al., 2019; Wolf et al., 2020). Further, it has recently been shown that models are overconfident on examples which are easy to attack (Qin et al., 2021)
and indeed, such over-confident predictions plague
much of modern deep learning (Kong et al., 2020; Guo et al., 2017; Nguyen et al., 2015; Rahimi et al., 2020). Label smoothing is a regularization method that has been proven effective in a variety of applications, and modalities (Szegedy et al., 2016; Chorowski and Jaitly, 2017; Vaswani et al., 2017).
Importantly, it has been shown to reduce overconfident predictions and produce better confidence calibrated classifiers (Muller et al., 2019; Zhang et al., 2021; Dan and Roth, 2021; Desai and Durrett, 2020; Huang et al., 2021; Liu and JaJa, 2020).
In this work, we focus on the question: does label smoothing also implicitly help in adversarial robustness? While there has been some investigation in this direction for adversarial attacks in computer vision, (Fu et al., 2020; Goibert and Dohmatob, 2019; Shafahi et al., 2019), there is a gap in understanding of whether it helps with discrete, text adversarial attacks used against NLP
systems. With the increasing need for robust NLP
models in safety-critical applications and a lack of generic robustness strategies,² there is a need to understand the inherent robustness properties of popular label smoothing strategies, and the interplay between confidence and robustness of a model.
In this paper, we extensively study standard label smoothing and its adversarial variant, covering robustness, prediction confidence, and domain transfer properties. We observe that label smoothing provides implicit robustness against adversarial examples. Particularly, we focus on pre-trained transformer models and test robustness under various kinds of black-box and white-box word-level adversarial attacks, in both in-domain and out-of-domain scenarios. Our experiments show that label smoothing (1) improves robustness to text adversarial attacks (both black-box and white-box), and (2) mitigates over-confident errors on adversarial textual examples. Analysing the adversarial examples along various quality dimensions reveals the remarkable efficacy of label smoothing as a simple add-on robustness and calibration tool.
²which are flexible, simple and not over-specialized to very specific kinds of text adversarial attacks.
## 2 Background
## 2.1 Text Adversarial Attacks
Our experiments evaluate the robustness of text classification models under three state-of-the-art text adversarial attacks: TextFooler (black-box), BAE (black-box) and SemAttack (white-box), described below (a minimal sketch of running such attacks follows the list). For a particular victim NLP model and a raw text input, the attack produces semantically-similar adversarial text as output. Importantly, only those examples which are originally correctly predicted by the victim model are attacked. The attacks considered are word-level, i.e., they replace words in a clean text with their synonyms to maintain the meaning of the clean text, but change the prediction of the victim models.
- **TextFooler (TF)**: (Jin et al., 2019) proposes an attack which determines the word importance in a sentence, and then replaces the important words with qualified synonyms.
- BAE: (Garg and Ramakrishnan, 2020) uses masked pre-trained language models to generate replacements for the important words until the victim model's prediction is incorrect.
- **SemAttack (SemAtt)**: (Wang et al., 2022)
introduces an attack to search perturbations in the contextualized embedding space by formulating an optimization problem as in (Carlini and Wagner, 2016). We specifically use the white-box word-level version of this attack.
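The paper does not state which attack implementation it uses; as a hedged illustration, TextFooler and BAE are available as recipes in the TextAttack library (SemAttack has a separate reference implementation), so a black-box attack run could look roughly like this. The victim checkpoint name is an assumption.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019  # BAEGarg2019 for the BAE attack
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Assumed victim checkpoint (a BERT SST-2 classifier); swap in any fine-tuned victim model.
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-SST-2")
tokenizer = transformers.AutoTokenizer.from_pretrained("textattack/bert-base-uncased-SST-2")
victim = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(victim)
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=1000))
results = attacker.attack_dataset()  # only correctly classified examples are attacked
```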
## 2.2 Label Smoothing
Label Smoothing is a modified fine-tuning procedure to address overconfident predictions. It introduces uncertainty to smoothen the posterior distribution over the target labels. Label smoothing has been shown to implicitly calibrate neural networks on out-of-distribution data, where *calibration* measures how well the model confidences are aligned with the empirical likelihoods (Guo et al., 2017).
- **Standard Label Smoothing (LS)** constructs a new target vector y_i^LS from the one-hot target vector y_i, where y_i^LS = (1 − α) y_i + α/K for a K-class classification problem. α is a hyperparameter whose range is from 0 to 1.
- **Adversarial Label Smoothing (ALS)** (Goibert and Dohmatob, 2019) constructs a new target vector y_i^ALS with a probability of 1 − α on the target label and α on the label to which the classification model assigns the minimum softmax score, thus introducing uncertainty.

For both LS and ALS, the cross entropy loss is subsequently minimized between the model predictions and the modified target vectors y_i^LS, y_i^ALS.
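A minimal PyTorch sketch of the two fine-tuning objectives (assuming standard logits/targets tensors; not the authors' code):

```python
import torch.nn.functional as F

def ls_loss(logits, targets, alpha=0.45):
    # Standard label smoothing: y_i^LS = (1 - alpha) * one_hot(y_i) + alpha / K
    K = logits.size(-1)
    y_ls = (1 - alpha) * F.one_hot(targets, K).float() + alpha / K
    return -(y_ls * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def als_loss(logits, targets, alpha=0.45):
    # Adversarial label smoothing: 1 - alpha on the gold label,
    # alpha on the class with the minimum softmax score.
    K = logits.size(-1)
    min_labels = logits.argmin(dim=-1)  # softmax is monotone in the logits
    y_als = (1 - alpha) * F.one_hot(targets, K).float() + alpha * F.one_hot(min_labels, K).float()
    return -(y_als * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```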
## 3 Experiments
In this section, we present a thorough empirical evaluation on the effect of label smoothing on adversarial robustness for two pre-trained transformer models: BERT and its distilled variant, dBERT,
which are the victim models. We attack the victim models using TF, BAE, and SemAttack. For each attack, we present results on both the standard models and the label-smoothed models on various classification tasks: text classification and natural language inference. For each dataset we evaluate on a randomly sampled subset of the test set (1000 examples), as done in prior work (Li et al., 2021; Jin et al., 2019; Garg and Ramakrishnan, 2020). We evaluate on the following tasks; other details about the setting are in Appendix A.8:
- **Text Classification**: We evaluate on movie review classification using Movie Review (MR)
(Pang and Lee, 2005) and Stanford Sentiment Treebank (SST2) (Socher et al., 2013) (both binary classification), restaurant review classification: Yelp Review (Zhang et al., 2015a)
(binary classification), and news category classification: AG News (Zhang et al., 2015b)
(having the following four classes: World, Sports, Business, Sci/Tech).
- **Natural Language Inference:** We investigate two datasets for this task: the Stanford Natural Language Inference Corpus (SNLI) (Bowman et al., 2015) and the Multi-Genre Natural Language Inference corpus (MNLI) (Williams et al., 2018), both having three classes. For MNLI, our work only evaluates performance (see the out-of-domain setting in Section 3.2). Additional results on more datasets, models, other attacks and α values are presented in the Appendix.
| SST-2 | Clean Acc (↑), α=0 | α=0.45 | Attack Success Rate (↓), α=0 | α=0.45 | Adv Conf (↓), α=0 | α=0.45 |
|---|---|---|---|---|---|---|
| BERT, TF | 91.97 | 92.09 | 96.38 | 88.92 | 78.43 | 63.62 |
| BERT, BAE | 91.97 | 92.09 | 57.11 | 53.42 | 86.92 | 68.35 |
| BERT, SemAtt | 91.97 | 92.09 | 86.41 | 54.05 | 80.12 | 64.55 |
| dBERT, TF | 89.56 | 89.68 | 96.29 | 89.77 | 76.28 | 61.60 |
| dBERT, BAE | 89.56 | 89.68 | 59.28 | 56.52 | 83.55 | 66.11 |
| dBERT, SemAtt | 89.56 | 89.68 | 91.68 | 69.69 | 78.93 | 62.42 |

| AG_news | Clean Acc (↑), α=0 | α=0.45 | Attack Success Rate (↓), α=0 | α=0.45 | Adv Conf (↓), α=0 | α=0.45 |
|---|---|---|---|---|---|---|
| BERT, TF | 94.83 | 94.67 | 88.26 | 77.47 | 59.02 | 42.46 |
| BERT, BAE | 94.83 | 94.67 | 74.83 | 62.82 | 60.66 | 43.98 |
| BERT, SemAtt | 94.83 | 94.67 | 52.65 | 30.49 | 62.32 | 44.99 |
| dBERT, TF | 94.73 | 94.47 | 90.11 | 74.52 | 57.60 | 41.40 |
| dBERT, BAE | 94.73 | 94.47 | 77.79 | 63.65 | 60.01 | 42.74 |
| dBERT, SemAtt | 94.73 | 94.47 | 52.07 | 34.05 | 60.40 | 43.27 |

| SNLI | Clean Acc (↑), α=0 | α=0.45 | Attack Success Rate (↓), α=0 | α=0.45 | Adv Conf (↓), α=0 | α=0.45 |
|---|---|---|---|---|---|---|
| BERT, TF | 89.56 | 89.23 | 96.5 | 96.15 | 68.27 | 52.61 |
| BERT, BAE | 89.56 | 89.23 | 74.95 | 74.82 | 76.13 | 57.42 |
| BERT, SemAtt | 89.56 | 89.23 | 99.11 | 91.94 | 75.41 | 58.01 |
| dBERT, TF | 87.27 | 87.1 | 98.12 | 96.86 | 65.19 | 50.80 |
| dBERT, BAE | 87.27 | 87.1 | 74.08 | 72.91 | 72.89 | 55.49 |
| dBERT, SemAtt | 87.27 | 87.1 | 98.43 | 92.84 | 71.17 | 54.96 |

| Yelp | Clean Acc (↑), α=0 | α=0.45 | Attack Success Rate (↓), α=0 | α=0.45 | Adv Conf (↓), α=0 | α=0.45 |
|---|---|---|---|---|---|---|
| BERT, TF | 97.73 | 97.7 | 99.32 | 92.90 | 64.85 | 55.36 |
| BERT, BAE | 97.73 | 97.7 | 55.35 | 45.14 | 68.28 | 57.38 |
| BERT, SemAtt | 97.73 | 97.7 | 93.55 | 36.17 | 74.53 | 60.24 |
| dBERT, TF | 97.47 | 97.4 | 99.45 | 93.36 | 61.75 | 54.63 |
| dBERT, BAE | 97.47 | 97.4 | 58.14 | 45.59 | 64.27 | 57.14 |
| dBERT, SemAtt | 97.47 | 97.4 | 97.37 | 43.92 | 71.34 | 60.57 |

Table 1: Comparison of standard models and models fine-tuned with standard label smoothing techniques (LS).
## 3.1 In-Domain Setting
In the in-domain setting (iD), the pre-trained transformer models are fine-tuned on the train set for each task and evaluated on the corresponding test set. For each case, we report the clean accuracy, the adversarial attack success rate (percentage of misclassified examples after an attack) and the average confidence on successfully attacked examples (on which the model makes a wrong prediction); details of each metric are presented in Appendix A.2. Table 1 shows the performance of BERT and dBERT, with and without label smoothing. We choose label smoothing factor α = 0.45 for standard label-smoothed models in our experiments.
We see that label-smoothed models are more robust for every adversarial attack across different datasets in terms of the attack success rate, which is a standard metric in this area (Li et al., 2021; Lee et al., 2022). Additionally, the higher confidence of the standard models on the successfully attacked examples indicates that label smoothing helps mitigate overconfident mistakes in the adversarial setting. Importantly, the clean accuracy remains almost unchanged in all the cases. Moreover, we observe that the models gain much more robustness from LS under white-box attacks, compared to the black-box setting. We perform hyperparameter sweeping for the label smoothing factor α to investigate its impact on model accuracy and adversarial robustness. Figure 1 shows that the attack success rate gets lower as we increase the label smoothing factor when fine-tuning the model, while the test accuracy remains comparable. However, when the label smoothing factor is larger than 0.45, there is no further improvement in adversarial robustness in terms of attack success rate. Automatic search for an optimal label smoothing factor and its theoretical analysis is important future work.
![2_image_0.png](2_image_0.png)
| SNLI | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 89.56 | 88.5 | 96.5 | 96.5 | 68.27 | 41.22 |
| BAE | 89.56 | 88.5 | 74.95 | 74.87 | 76.13 | 44.93 |
| SemAtt | 89.56 | 88.5 | 99.11 | 91.53 | 75.41 | 44.97 |

| AG_news | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|---------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 94.83 | 94.37 | 88.26 | 77.74 | 59.02 | 32.87 |
| BAE | 94.83 | 94.37 | 74.83 | 64.15 | 60.66 | 33.45 |
| SemAtt | 94.83 | 94.37 | 52.65 | 27.13 | 62.32 | 34.72 |
## 3.2 Out-Of-Domain Setting
We now evaluate the benefits of label smoothing for robustness in the out-of-domain (OOD) setting, where the pre-trained model is fine-tuned on a particular dataset and is then evaluated directly on a different dataset, which has a matching label space.
Three examples of these that we evaluate on are the Movie Reviews to SST-2 transfer, the SST-2 to Yelp transfer, and the SNLI to MNLI transfer.
In Table 3, we again see that label-smoothing
| MR→SST2 | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|---------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 90.71 | 91.06 | 90.9 | 90.93 | 69.47 | 58.41 |
| BAE | 90.71 | 91.06 | 62.83 | 63.1 | 75.2 | 62.6 |
| SemAtt | 90.71 | 91.06 | 82.68 | 76.07 | 67.64 | 57.9 |
| dBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 88.19 | 88.99 | 94.28 | 94.59 | 64.95 | 57.2 |
| BAE | 88.19 | 88.99 | 65.41 | 65.72 | 71.89 | 61.5 |
| SemAtt | 88.19 | 88.99 | 88.56 | 86.21 | 66.51 | 58.14 |

| SNLI→MNLI | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|-----------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 73.4 | 72.1 | 94.82 | 92.79 | 58.04 | 46.43 |
| BAE | 73.4 | 72.1 | 82.56 | 80.72 | 63.00 | 49.45 |
| SemAtt | 73.4 | 72.1 | 99.73 | 98.75 | 60.32 | 47.35 |
| dBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 65.4 | 62.1 | 94.50 | 92.59 | 54.54 | 44.81 |
| BAE | 65.4 | 62.1 | 77.68 | 75.52 | 58.88 | 47.83 |
| SemAtt | 65.4 | 62.1 | 99.39 | 96.78 | 57.10 | 45.43 |

| SST-2 → Yelp | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|--------------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 92.5 | 92.4 | 99.57 | 98.27 | 60.80 | 54.28 |
| BAE | 92.5 | 92.4 | 63.68 | 60.71 | 64.27 | 55.66 |
| SemAtt | 92.5 | 92.4 | 95.80 | 68.17 | 68.37 | 57.45 |
| dBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 91.7 | 91.1 | 99.78 | 98.02 | 59.12 | 53.30 |
| BAE | 91.7 | 91.1 | 68.70 | 63.45 | 61.37 | 54.21 |
| SemAtt | 91.7 | 91.1 | 99.02 | 82.15 | 67.01 | 57.37 |

Table 3: Comparison of standard models and LS models.
helps produce more robust models in the OOD setting, although with smaller gains than in the iD setting. This is a challenging setting, as evidenced by the significant drop in clean accuracy compared to the in-domain setting. We also see that the standard models make over-confident errors on successfully attacked adversarial examples when compared to label-smoothed models.
## 3.3 Qualitative Results
In this section, we try to understand how the generated adversarial examples differ between label-smoothed and standard models. First, we look at some qualitative examples: in Table 4, we show some examples (clean text) for which the different attack schemes fail to craft an attack against the label-smoothed model but successfully attack the standard model.
Table 4: Examples for which an attack could be found for the standard model but not for the label smoothed model. The Victim column shows the dataset and the pretrained model (dBERT denotes distilBERT).
We also performed automatic evaluation of the quality of the adversarial examples for standard and label smoothed models, adopting standard metrics from previous studies (Jin et al., 2019; Li et al.,
2021). Ideally, we want the adversarial sentences to be free of grammar errors, fluent, and semantically similar to the clean text. This can be quantified using metrics such as grammar errors, perplexity, and similarity scores (compared to the clean text).
The reported scores for each metric are computed over only the successful adversarial examples, for each attack and model type.7
| SST-2 | Perplexity (↑) | | Similarity Score (↓) | | Grammar Error (↑) | |
|-------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 400.31 | 447.58 | 0.800 | 0.779 | 0.33 | 0.38 |
| BAE | 300.74 | 305.28 | 0.867 | 0.855 | −0.05 | −0.04 |

| AG_News | Perplexity (↑) | | Similarity Score (↓) | | Grammar Error (↑) | |
|---------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 342.02 | 355.87 | 0.782 | 0.772 | 1.37 | 1.40 |
| BAE | 169.37 | 170.73 | 0.851 | 0.845 | 0.97 | 1.00 |
| Victim | Attack | | Text |
|--------|--------|-------|------|
| SST2 (BERT) | BAE | clean | at once half-baked and overheated. |
| | | adv | at once warm and overheated. |
| MR (dBERT) | TF | clean | no surprises. |
| | | adv | no surprise. |
Table 5 shows that the quality of adversarial examples generated against label-smoothed models is worse than that of examples generated against standard models across the different metrics, suggesting that the adversarial sentences crafted against label-smoothed models are easier to perceive. This further demonstrates that label smoothing makes it harder to find adversarial vulnerabilities.
## 4 Conclusion
We presented an extensive empirical study to investigate the effect of label smoothing techniques on adversarial robustness for various NLP tasks, for various victim models and adversarial attacks. Our results demonstrate that label smoothing imparts implicit robustness to models, even under domain shifts. This first work on the effects of LS for text adversarial attacks, complemented with prior work on LS and implicit calibration (Desai and Durrett, 2020; Dan and Roth, 2021), is an important step towards developing robust, reliable models. In the future, it would be interesting to explore the combination of label smoothing with other regularization and adversarial training techniques to further enhance the adversarial robustness of NLP models.
## 5 Limitations
One limitation of our work is that we focus on the robustness of pre-trained transformer language models against word-level adversarial attacks, which is the most common setting in this area. Future work could extend this empirical study to other types of attacks (for example, character-level and sentence-level attacks) and to more diverse architectures. Further, it will be very interesting to theoretically understand how label smoothing (1) provides implicit robustness to text adversarial attacks and (2) mitigates over-confident predictions on adversarially attacked examples.
## 6 Ethics Statement
Adversarial examples present a severe risk to machine learning systems, especially when deployed in real-world risk sensitive applications. With the ubiquity of textual information in real-world applications, it is extremely important to defend against adversarial examples and also to understand the robustness properties of commonly used techniques like Label Smoothing. From a societal perspective, by studying the effect of this popular regularization strategy, this work empirically shows that it helps robustness against adversarial examples in in-domain and out-of-domain scenarios, for both white-box and black-box attacks across diverse tasks and models. From an ecological perspective, label smoothing does not incur any additional computational cost over standard fine-tuning emphasizing its efficacy as a general-purpose tool to improve calibration and robustness.
## Acknowledgements
Research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0080. This work was supported by Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA).
The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense, the Army Research Office or the U.S. Government. This research was also supported by a gift from AWS AI for research in Trustworthy AI.
## References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
2018. Generating natural language adversarial examples. *arXiv preprint arXiv:1804.07998*.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Nicholas Carlini and David Wagner. 2016. Towards evaluating the robustness of neural networks.
Jan Chorowski and Navdeep Jaitly. 2017. Towards better decoding and language model integration in sequence to sequence models. *Proc. Interspeech 2017*,
pages 523–527.
Soham Dan and Dan Roth. 2021. On the Effects of Transformer Size on In- and Out-of-Domain Calibration. In Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 295–302.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Chaohao Fu, Hongbin Chen, Na Ruan, and Weijia Jia.
2020. Label smoothing and adversarial robustness.
arXiv preprint arXiv:2009.08233.
Siddhant Garg and Goutham Ramakrishnan. 2020. Bae:
Bert-based adversarial examples for text classification. *arXiv preprint arXiv:2004.01970*.
Morgane Goibert and Elvis Dohmatob. 2019. Adversarial robustness via adversarial label-smoothing.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In *International Conference on Machine* Learning, pages 1321–1330. PMLR.
Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. arXiv preprint arXiv:2104.13733.
Shuangping Huang, Yu Luo, Zhenzhou Zhuang, JinGang Yu, Mengchao He, and Yongpan Wang. 2021.
Context-aware selective label smoothing for calibrating sequence recognition model. In Proceedings of the 29th ACM International Conference on Multimedia, pages 4591–4599.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural language attack on text classification and entailment.
arXiv preprint arXiv:1907.11932.
Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in-and out-ofdistribution data. *arXiv preprint arXiv:2010.11506*.
Deokjae Lee, Seungyong Moon, Junhyeok Lee, and Hyun Oh Song. 2022. Query-efficient and scalable black-box adversarial attacks on discrete sequential data via bayesian optimization. In *International Conference on Machine Learning*, pages 12478–12497.
PMLR.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021. Contextualized perturbation for textual adversarial attack.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053–5069, Online. Association for Computational Linguistics.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. Bert-attack: Adversarial attack against bert using bert. arXiv preprint arXiv:2004.09984.
Chihuang Liu and Joseph JaJa. 2020. Class-similarity based label smoothing for generalized confidence calibration. In *arXiv preprint arXiv: 2006.14028*.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *arXiv preprint* arXiv:1711.05101.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017.
Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*.
John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp.
Rafael Muller, Simon Kornblith, and Geoffrey E Hinton.
2019. When does label smoothing help? Advances in neural information processing systems, 32.
Anh Nguyen, Jason Yosinski, and Jeff Clune. 2015.
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 427–436.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In *Proceedings of the ACL*.
Yao Qin, Xuezhi Wang, Alex Beutel, and Ed Chi. 2021.
Improving calibration through the relationship with adversarial robustness. *Advances in Neural Information Processing Systems*, 34:14358–14369.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, and Byron Boots. 2020. Intra orderpreserving functions for calibration of multi-class neural networks. *Advances in Neural Information* Processing Systems, 33:13456–13467.
Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B
Viegas, Andy Coenen, Adam Pearce, and Been Kim.
2019. Visualizing and measuring the geometry of bert. *Advances in Neural Information Processing* Systems, 32.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing. Association for Computational Linguistics.
Ali Shafahi, Amin Ghiasi, Furong Huang, and Tom Goldstein. 2019. Label smoothing and logit squeezing: a replacement for adversarial training? *arXiv* preprint arXiv:1910.11585.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, and Bo Li. 2022. SemAttack: Natural textual attacks via different semantic spaces. In *Findings of the Association for Computational Linguistics: NAACL 2022*,
pages 176–205, Seattle, United States. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers), pages 1112–1122. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Chang-Bin Zhang, Peng-Tao Jiang, Qibin Hou, Yunchao Wei, Qi Han, Zhen Li, and Ming-Ming Cheng.
2021. Delving deep into label smoothing. IEEE
Transactions on Image Processing, 30:5984–5996.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015a.
Character-level Convolutional Networks for Text Classification. *arXiv:1509.01626 [cs]*.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015b.
Character-level convolutional networks for text classification. In *NIPS*.
## A Appendix
- A.1 Pictorial Overview of the Adversarial Attack Framework
- A.2 Description of the Evaluation Metrics
- A.3 Details of Automatic Attack Evaluation
- A.4 Additional results on Movie Review Dataset
- A.5 Additional white-box attack on labelsmoothed models
- A.6 Additional results for α = 0.1
- A.7 Additional results on ALBERT model
- A.8 Dataset overview and experiment details
- A.9 Attack success rate versus label smoothing factors for different attacks (TextFooler and SemAttack)
- **A.10** Average number of word change versus Confidence
## A.1 Overview Of The Framework
![6_Image_0.Png](6_Image_0.Png)
Figure 2: Here we show an example generated by wordlevel adversarial attack TextFooler (Jin et al., 2019) on SST-2 data. By replacing excitement with its synonym exhilaration, the text classification model changes its prediction from Negative to Positive, which is incorrect.
## A.2 Evaluation Metrics
The following are the details of the evaluation metrics from previous works (Lee et al., 2022; Li et al., 2021):

$$\text{Clean Accuracy} = \frac{\#\text{ of correctly predicted clean examples}}{\#\text{ of clean examples}}$$

$$\text{Attack Success Rate} = \frac{\#\text{ of successful adversarial examples}}{\#\text{ of correctly predicted clean examples}}$$

where successful adversarial examples are derived from correctly predicted examples.

$$\text{Adv Conf} = \frac{\text{sum of confidence of successful adversarial examples}}{\#\text{ of successful adversarial examples}}$$
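As an illustration, the three metrics can be computed from per-example attack results as follows; the record format used here is an assumption made for this sketch, not the output format of any particular attack toolkit.

```python
def attack_metrics(records):
    """records: one dict per clean test example with keys
    'clean_correct' (bool), 'attack_succeeded' (bool, only set for correctly
    predicted examples) and 'adv_confidence' (confidence on the successful
    adversarial example)."""
    correct = [r for r in records if r["clean_correct"]]
    succ = [r for r in correct if r["attack_succeeded"]]
    clean_acc = len(correct) / len(records)
    attack_success_rate = len(succ) / max(len(correct), 1)
    adv_conf = sum(r["adv_confidence"] for r in succ) / max(len(succ), 1)
    return clean_acc, attack_success_rate, adv_conf
```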
## A.3 Attack Evaluation
We performed automatic evaluation of adversarial attacks against standard models and label smoothed models following previous studies (Jin et al., 2019; Li et al., 2021). Following are the details of the metrics we used in Table 5:
**Perplexity** evaluates the fluency of the input using language models. We use GPT-2 (Radford et al., 2019) to compute perplexity, as in Li et al. (2021).

**Similarity Score** measures the similarity between two sentences. We use Sentence Transformers (Reimers and Gurevych, 2019) to compute sentence embeddings and then calculate the cosine similarity between each clean example and the corresponding adversarially modified example.

**Grammar Error** is the average increase in the number of grammar errors between a clean example and the corresponding adversarially modified example.8
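A minimal sketch of how these three quality metrics can be computed with off-the-shelf libraries is shown below; the specific Sentence Transformers checkpoint is an assumption for illustration and not necessarily the one used in our evaluation.

```python
import math
import torch
import language_tool_python
from sentence_transformers import SentenceTransformer, util
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
sbert = SentenceTransformer("all-MiniLM-L6-v2")   # assumed checkpoint
tool = language_tool_python.LanguageTool("en-US")

@torch.no_grad()
def perplexity(text):
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    return math.exp(gpt2(ids, labels=ids).loss.item())  # exp of mean token NLL

def similarity(clean, adv):
    emb = sbert.encode([clean, adv], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def grammar_error_increment(clean, adv):
    return len(tool.check(adv)) - len(tool.check(clean))
```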
## A.4 Additional Results On Movie Review Dataset
Here we provide results on the Movie Review dataset (Pang and Lee, 2005) under the in-domain setting.
| MR | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|----|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TextFooler | 84.4 | 83.7 | 92.54 | 92.0 | 67.93 | 58.33 |
| BAE | 84.4 | 83.7 | 62.09 | 61.17 | 74.33 | 62.4 |
| SemAtt | 84.4 | 83.7 | 83.18 | 76.34 | 68.8 | 58.18 |
| distilBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TextFooler | 82.3 | 82.6 | 94.9 | 95.88 | 64.64 | 57.17 |
| BAE | 82.3 | 82.6 | 67.31 | 67.19 | 70.54 | 60.88 |
| SemAtt | 82.3 | 82.6 | 90.16 | 87.77 | 65.55 | 57.33 |
## A.5 Additional Results On An Additional White-Box Attack
In this section, we use another recent popular whitebox attack named Gradient-based Attack (Guo et al., 2021). This is a gradient-based approach that searches for a parameterized word-level adversarial attack distribution, and then samples adversarial examples from the distribution. We run this attack on standard and label smoothed BERT models and the results are listed below.
We observe that label smoothing also helps with adversarial robustness against this attack

8We use https://pypi.org/project/language-tool-python/ to compute grammar errors.
| Grad Attack | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|-------------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| SST-2 | 91.97 | 92.09 | 98.38 | 82.94 | 98.75 | 76.35 |
| AG_news | 94.9 | 94.8 | 98.63 | 68.88 | 95.35 | 63.25 |
| Yelp | 95.3 | 95.5 | 99.90 | 87.02 | 99.24 | 76.52 |
| SNLI | 89.7 | 90.2 | 96.1 | 86.36 | 59.99 | 37.28 |
| SST2 → Yelp | 88.6 | 88.4 | 99.89 | 94.84 | 98.37 | 77.52 |
across four datasets under the iD setting. The results also show that, similar to SemAttack, robustness against the gradient-based attack benefits more from label smoothing than robustness against black-box attacks like TextFooler and BAE.
## A.6 Additional Results Of Α = 0.1
Tables 8 and 9 provide additional results showing how the adversarial robustness of fine-tuned language models changes under the iD and OOD scenarios when the label smoothing factor is α = 0.1.

Table 10 provides additional results for adversarial label smoothing (ALS) with α = 0.1.
## A.7 Additional Results On Albert
In this section, we include experiment results for standard ALBERT and label smoothed ALBERT
in Table 11. We observe that the label smoothing technique also improves adversarial robustness of ALBERT model across different datasets.
Table 11: Comparison of standard models and label smoothed models against TextFooler and BAE attacks for ALBERT model.
| ALBERT | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|--------|------|------|------|------|------|------|
| α | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 92.66 | 92.78 | 94.68 | 90.73 | 76.29 | 65.63 |
| BAE | 92.66 | 92.78 | 60.15 | 65.02 | 83.67 | 70.17 |
| α | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 94.9 | 94.5 | 77.66 | 56.72 | 58.78 | 42.59 |
| BAE | 94.9 | 94.5 | 65.54 | 49.74 | 59.98 | 43.79 |
| α | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TF | 90.1 | 90.3 | 94.89 | 93.69 | 69.66 | 53.67 |
| BAE | 90.1 | 90.3 | 76.91 | 75.86 | 75.05 | 56.42 |
| SST-2 | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|-------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 91.97 | 92.2 | 96.38 | 94.4 | 78.43 | 74.39 |
| BAE | 91.97 | 92.2 | 57.11 | 55.22 | 86.92 | 82.29 |
| distilBERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 89.56 | 89.68 | 96.29 | 95.14 | 76.28 | 70.77 |
| BAE | 89.56 | 89.68 | 59.28 | 58.44 | 83.55 | 78.16 |

| AG_news | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|---------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 94.83 | 95.0 | 88.26 | 78.39 | 59.02 | 55.17 |
| BAE | 94.83 | 95.0 | 74.83 | 65.58 | 60.66 | 56.24 |
| distilBERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 94.73 | 94.53 | 90.11 | 81.66 | 57.6 | 53.43 |
| BAE | 94.73 | 94.53 | 74.83 | 67.7 | 60.01 | 54.64 |

| Yelp | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 97.73 | 97.77 | 99.32 | 97.99 | 64.85 | 63.18 |
| BAE | 97.73 | 97.77 | 55.35 | 52.88 | 68.28 | 66.28 |
| distilBERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 97.47 | 97.5 | 99.45 | 98.91 | 61.75 | 60.35 |
| BAE | 97.47 | 97.5 | 58.14 | 51.86 | 64.27 | 63.04 |

| SNLI | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 89.56 | 88.87 | 96.5 | 96.74 | 68.83 | 64.96 |
| BAE | 89.56 | 88.87 | 74.95 | 75.1 | 76.13 | 72.65 |
| distilBERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 87.27 | 87.03 | 98.12 | 96.94 | 65.19 | 62.41 |
| BAE | 87.27 | 87.03 | 74.08 | 73.82 | 72.89 | 69.57 |

Table 8: Comparison of standard models and LS models (α = 0.1) under the iD setting.

| SNLI | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 89.56 | 90.4 | 96.5 | 95.02 | 68.27 | 67.54 |
| BAE | 89.56 | 90.4 | 74.95 | 75.96 | 76.13 | 73.83 |

| AG_news | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|---------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TF | 94.83 | 94.6 | 88.26 | 85.27 | 59.02 | 53.17 |
| BAE | 94.83 | 94.6 | 74.83 | 69.1 | 60.66 | 54.99 |

Table 10: Comparison of standard models versus models trained with ALS against various attacks on SNLI and AG_news. ↑ (↓) denotes higher (lower) is better respectively.

| Dataset | No. of classes | Train/Test size | Avg. Length |
|---------|----------------|-----------------|-------------|
| MR | 2 | 8530/1066 | 18.64 |
| SST-2 | 2 | 6.7e4/872 | 17.4 |
| Yelp | 2 | 5.6e5/3.8e4 | 132.74 |
| AG_news | 4 | 1.2e5/7600 | 38.68 |
| SNLI | 3 | 5.5e5/1e4 | 22.01 |
| MNLI | 3 | 3.9e5/9815 | 28.96 |

Table 12: Summary of datasets.
| SNLI → MNLI | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|-------------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TextFooler | 73.4 | 71.9 | 94.82 | 94.85 | 58.04 | 48.56 |
| BAE | 73.4 | 71.9 | 82.56 | 77.19 | 63 | 49.3 |
| distilBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TextFooler | 65.4 | 65.2 | 94.5 | 94.17 | 54.54 | 52.63 |
| BAE | 65.4 | 65.2 | 77.68 | 75.15 | 58.88 | 56.16 |

| SST-2 → Yelp | Clean Acc (↑) | | Attack Success Rate (↓) | | Adv Conf (↓) | |
|--------------|------|------|------|------|------|------|
| BERT (α) | 0 | 0.1 | 0 | 0.1 | 0 | 0.1 |
| TextFooler | 92.5 | 92.0 | 99.57 | 99.13 | 60.8 | 58.13 |
| BAE | 92.5 | 92.0 | 63.68 | 63.37 | 64.27 | 60.63 |
| distilBERT (α) | 0 | 0.45 | 0 | 0.45 | 0 | 0.45 |
| TextFooler | 91.7 | 91.4 | 99.78 | 99.34 | 59.12 | 56.42 |
| BAE | 91.7 | 91.4 | 68.7 | 67.07 | 61.37 | 57.73 |

Table 9: Comparison of standard models and LS models (α = 0.1) under the OOD setting.
## A.8 Dataset Overview and Experiment Details

We use Huggingface (Wolf et al., 2020) to load the datasets and to fine-tune the pre-trained models. All models are fine-tuned for 3 epochs using the AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate starting from 5e-6. The training and attacking are run on an NVIDIA Quadro RTX 6000 GPU (24GB). For both the BAE and TextFooler attacks, we use the implementation in TextAttack (Morris et al., 2020) with the default hyper-parameters (except for AG_news, where we relax the similarity threshold from 0.93 to 0.7 when using the BAE attack). The SemAttack implementation is from Wang et al. (2022), while the code for generating the contextualized embedding space is adapted from Reif et al. (2019). The reported numbers are the average performance over 3 random runs of the experiment for the iD setting, and the standard deviation is less than 2%.
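For reference, a minimal sketch of how a TextAttack recipe can be run against a fine-tuned Huggingface classifier is shown below; the checkpoint path, dataset choice, and number of attacked examples are placeholders for illustration, not the exact values used in our experiments.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# load a fine-tuned classifier (placeholder path)
model = transformers.AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-model")
tokenizer = transformers.AutoTokenizer.from_pretrained("path/to/finetuned-model")
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# build the attack recipe with its default hyper-parameters
attack = TextFoolerJin2019.build(wrapper)

# attack examples from the task's test split
dataset = HuggingFaceDataset("ag_news", split="test")
args = AttackArgs(num_examples=1000, disable_stdout=True)
Attacker(attack, dataset, args).attack_dataset()
```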
## A.9 Attack Success Rate Versus Label Smoothing Factors
As mentioned in Section 3.1, we plot the attack success rate of BAE attack versus the label smoothing factors. Here, we plot the results for the TextFooler and SemAttack in Figure 3 and 4, and observe the same tendency as we discussed above.
We also plot the attack success rate of the BAE/TextFooler attack versus the adversarial label smoothing factor in Figures 5 and 6.

![9_image_1.png](9_image_1.png)

![9_image_2.png](9_image_2.png)

![9_image_4.png](9_image_4.png)
We additionally plot the clean accuracy versus the label smoothing factor in Figure 7, and find out that there is not much drop in clean accuracy with increasing the label smoothing factors.
## A.10 Average Number Of Word Change Versus Confidence
Word change rate is defined as the ratio between the number of words replaced after the attack and the total number of words in the sentence. Here we plot the bucket-wise word change rate of the adversarial attack versus confidence, and observe that the word change rate for high-confidence examples is higher for label smoothed models compared to standard models in most cases. This indicates that it is more difficult to attack label smoothed text classification models. Also note that the word change rate is zero for two bins because no clean texts fall into those bins.

![9_image_0.png](9_image_0.png)

![9_image_3.png](9_image_3.png)
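A small sketch of this bucket-wise statistic is shown below; the number of confidence bins is an assumption for illustration, and the word-level comparison assumes a substitution attack that preserves sentence length.

```python
import numpy as np

def word_change_rate(clean, adv):
    """Fraction of words replaced by a word-substitution attack."""
    c, a = clean.split(), adv.split()
    return sum(w1 != w2 for w1, w2 in zip(c, a)) / max(len(c), 1)

def bucketwise_mean(confidences, values, n_bins=10):
    """Mean value per confidence bucket; empty buckets are reported as 0."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(confidences, edges) - 1, 0, n_bins - 1)
    return [float(np.mean([v for v, i in zip(values, idx) if i == b]))
            if np.any(idx == b) else 0.0 for b in range(n_bins)]
```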
![9_image_5.png](9_image_5.png)
![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
We also bucket the examples based on the confidence scores, and plot the bucket-wise attack success rate (of the BAE attack on the Yelp dataset) versus confidence in Figure 10 and Figure 1. We observe that the label smoothing technique improves the adversarial robustness for high confidence score samples significantly. In future work, we plan to investigate the variations of robustness in label-smoothed models as a function of the model size.
![10_image_2.png](10_image_2.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.
✓ A2. Did you discuss any potential risks of your work?
Section 6.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes. Abstract and section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, And Appendix A.8.
✓ B1. Did you cite the creators of artifacts you used?
Section 3 and appendix A.8.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A.8
## C ✓ **Did You Run Computational Experiments?** Section 3.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.8
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.1 and Appendix A.8.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3 and Appendix A.8.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3 and Appendix A.3, A.8.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
abaskohi-etal-2023-lm | {LM}-{CPPF}: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | https://aclanthology.org/2023.acl-short.59 | In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets. To address this issue, researchers have proposed various adaptation approaches. Prompt-based tuning is arguably the most common way, especially for larger models. Previous research shows that adding contrastive learning to prompt-based fine-tuning is effective as it helps the model generate embeddings that are more distinguishable between classes, and it can also be more sample-efficient as the model learns from positive and negative examples simultaneously. One of the most important components of contrastive learning is data augmentation, but unlike computer vision, effective data augmentation for NLP is still challenging. This paper proposes LM-CPPF, Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models, which leverages prompt-based few-shot paraphrasing using generative language models, especially large language models such as GPT-3 and OPT-175B, for data augmentation. Our experiments on multiple text classification benchmarks show that this augmentation method outperforms other methods, such as easy data augmentation, back translation, and multiple templates. | # Lm-Cppf: Paraphrasing-Guided Data Augmentation For Contrastive Prompt-Based Few-Shot Fine-Tuning
Amirhossein Abaskohi1, Sascha Rothe2, Yadollah Yaghoobzadeh1,3 1School of Electrical and Computer Engineering College of Engineering, University of Tehran, Tehran, Iran 2Google DeepMind, Zürich, Switzerland 3 Tehran Institute for Advanced Studies, Khatam University, Iran [email protected], [email protected], [email protected]
## Abstract
In recent years, there has been significant progress in developing pre-trained language models for NLP. However, these models often struggle when fine-tuned on small datasets.
To address this issue, researchers have proposed various adaptation approaches. Promptbased tuning is arguably the most common way, especially for larger models. Previous research shows that adding contrastive learning to prompt-based fine-tuning is effective as it helps the model generate embeddings that are more distinguishable between classes, and it can also be more sample-efficient as the model learns from positive and negative examples simultaneously. One of the most important components of contrastive learning is data augmentation, but unlike computer vision, effective data augmentation for NLP is still challenging.
This paper proposes LM-CPPF, Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models, which leverages promptbased few-shot paraphrasing using generative language models, especially large language models such as GPT-3 and OPT-175B, for data augmentation. Our experiments on multiple text classification benchmarks show that this augmentation method outperforms other methods, such as easy data augmentation, back translation, and multiple templates.1
## 1 Introduction
Pre-trained language models (PLMs) are trained on large-scaled corpora in a self-supervised fashion. They have fundamentally changed the NLP
community in the past few years by achieving impressive results in various Tasks (Devlin et al.,
2018; Radford et al., 2018; Yang et al., 2019; Chiang et al., 2022). However, when PLMs are finetuned on small datasets, their performance declines.
Researchers have proposed various techniques to adapt PLMs to these scenarios (Snell et al., 2017; Sung et al., 2018).1 In addition to performance, fine-tuning PLMs to learn a new task is parameter-inefficient, because an entirely new model is required for every task (Houlsby et al., 2019).

1Our implementation is publicly available at: https://github.com/AmirAbaskohi/LM-CPPF
By the introduction of GPT-3 (Brown et al.,
2020b) with 175B parameters, it has been shown that Large Language Models (LLMs) are efficient few-shot learners as they can use their knowledge more effectively. One of the key features of these LLMs is their ability to perform multiple tasks using prompts. A language prompt is a piece of text that is added to the input query to help the model make more accurate predictions. In addition, LLMs can be fine-tuned for specific tasks using few examples. This has made them powerful tools for NLP
tasks, especially in few-shot scenarios. However, that might not be practical for many situations because of the model size. Therefore, there is a need to adapt smaller PLMs to work in a similar way to LLMs.
Prompt-based fine-tuning is a method for adapting PLMs to specific tasks or domains by providing a prompt (Schick and Schütze, 2020a,b). This approach has been shown to be effective in various NLP tasks, including text classification (Han et al.,
2021; Wang et al., 2022) and question answering
(Yao et al., 2022). However, it can be challenging to achieve strong performance when only a few examples are available for each task. Gao et al. (2020) introduced a prompt-based fine-tuning method called LM-BFF for RoBERTa (Liu et al.,
2019) to tackle this issue. Their approach includes automated prompt generation and a more effective way of using task examples in fine-tuning.
Building on the success of LM-BFF and considering contrastive learning's promising results both in computer vision (Chen et al., 2020) and NLP
(Chen et al., 2020; Miao et al., 2021), Jian et al.
(2022) present a contrastive learning framework to improve LM-BFF. They propose a Supervised Contrastive Learning (SCL) approach (Khosla et al.,
2020) that classifies inputs using different augmented views of the data. These views are created using different templates for their demonstrations when building prompts.
In this paper, we show that while SCL at the feature space can be beneficial, the use of different templates can limit the full potential of this approach. We propose **LM-CPPF** (Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models), in which we integrate the knowledge of LLMs like GPT-3 and OPT-175B
(Zhang et al., 2022) to build different views using paraphrasing. These models can generate paraphrases of a sentence with different syntax, not just by changing the lexicalization. Previous studies have considered generating paraphrases a challenging and costly NLP task (Siddique et al., 2020; Garg et al., 2021; Zhou and Bhat, 2021). However, PLMs can generate paraphrases easily and effectively using in-context learning with few examples. Although prior research has studied paraphrase generation with PLMs (Roy and Grangier, 2019; Hegde and Patil, 2020), to the best of our knowledge, this is the first time that large LLMs are utilized to generate paraphrases with prompts as an augmentation method. Our experiments on six different text classification tasks demonstrate that LMCPPF outperforms the previous SOTA methods of data augmentation in prompt-based fine-tuning, including Easy Data Augmentation (EDA) (Wei and Zou, 2019), Back Translation (BT) (Sugiyama and Yoshinaga, 2019), and multiple templates (Jian et al., 2022).
## 2 Related Works
LLMs like GPT-3 (Brown et al., 2020a) can perform NLP tasks with few examples and natural prompts. But smaller models are not efficient with this approach and there are data sparsity and prompt sensitivity issues. To address these challenges, Gao et al. (2021) propose LM-BFF, a framework that leverages a large PLM to automatically generate task-specific prompts for smaller models. It improves their few-shot performance on different NLP tasks. Some work have enhanced LM-BFF
with different prompt tuning methods. For example, Zhou et al. (2022) present a dual context-guided continuous prompt tuning method that uses the language context and connects discrete and continuous prompt tuning. Jian et al. (2022) integrate contrastive learning and data augmentation with LM-BFF. In their contrastive part, in addition to comparing different instances from the same or different classes, they introduced a novel promptspecific augmentation method. In their approach, they change the template of the prompt. In this paper, we use few-shot paraphrasing with LLMs for contrastive prompt-tuning, which fine-tunes models with natural prompts.
Paraphrasing is the task of expressing the same meaning with different words or structures. It can be used to create training data with increased diversity and naturalness for NLP tasks, such as text classification (Xie et al., 2020), natural language inference (Kumar et al., 2019), and text summarization (Loem et al., 2022), surpassing the limitations of traditional approaches. Paraphrasing helps with data scarcity and model generalization. There are different ways to generate paraphrases for data augmentation. One is back-translation (Sennrich et al.,
2016), which uses a translation system to convert a sentence to another language and back. Another is to use paraphrasing models trained on parallel paraphrase datasets (Wieting and Gimpel, 2018; Zhu et al., 2022). PLMs can also generate paraphrases by using large-scale corpora, but they may produce paraphrases that are not semantically consistent or relevant. LLMs can reduce this problem as they encode and generate language better. In this paper, we generate paraphrases by carefully prompting LLMs and then use them for data augmentation.
## 3 Method
Background Contrastive learning's success relies on data augmentation, which creates new views of the input data. Contrastive learning has been utilized for various tasks in deep learning (Le-Khac et al., 2020; Conde and Turgutlu, 2021; Abaskohi et al., 2022); however, most NLP data augmentation methods may influence semantics which results in limited improvement. For instance, EDA's synonym substitution may create entirely new samples since words do not have equal senses (Keselj, 2009). In addition to these augmentation methods, the approach used in Jian et al. (2022) cannot be counted as data augmentation as the sample is still the same and only the template for the verbalizer changes. Although it is a creative approach designed specifically for the prompt-based method of LM-BFF, it is limited in performance even compared to EDA in several benchmarks. Furthermore, it requires an expert to create multiple templates
![2_image_0.png](2_image_0.png)
for each task, which makes it challenging for newly emerged tasks. Here we propose leveraging LLMs to generate paraphrases and introduce LM-CPPF,
a novel approach aimed at addressing the challenges associated with contrastive prompt-based fine-tuning of PLMs.
Few-shot paraphrasing Paraphrasing is one of the best methods for data augmentation in NLP.
One of the most popular approaches for paraphrasing is back-translation (BT) (Sugiyama and Yoshinaga, 2019) due to its simplicity and efficiency.
Nonetheless, BT's performance depends a lot on the intermediary language. In this paper, we, instead, use a combination of prompt-learning and LLMs for paraphrasing. In few-shot paraphrasing, an LLM rewrites a sentence given an instruction and a few examples. We believe that LLMs generate high-quality paraphrases due to their encoded semantic and sentence structure knowledge. We utilize GPT-3 (Brown et al., 2020b) or OPT-175B
(Zhang et al., 2022) via their official APIs2 for generating paraphrases.
To avoid violating the prompt-based fine-tuning settings, we do not include any additional task data in generating our paraphrases. Following the few-shot setting in LM-BFF, we assume access to a PLM M and datasets D*train* and D*test* with label space Y, where there are only K = 16 examples per class in D*train*. We use this setting for both prompt-based few-shot paraphrasing and fine-tuning. To generate paraphrases, excluding the one sample that we want to paraphrase, we use QuillBot3 to create paraphrases for our prompts for the remaining 15 samples in the same class of D*train*. We leverage two types of prompts for paraphrasing: (I)
Only Demonstration: Here, the samples and their paraphrased versions are given using the templates in Table C.3 to demonstrate the task of paraphrasing. (II) **Demonstrations with Instruction:** In addition to the previous method, this one includes instructions at the beginning of the prompt, defining paraphrasing before demonstrations. These instructions can be seen in Table C.4.
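To make the prompting setup concrete, a minimal sketch of how such a few-shot paraphrasing prompt can be assembled and sent to a completion-style LLM API is shown below. The demonstration template ("<Original Text>, in other words <Paraphrased>") and the instruction text are the ones quoted from Tables C.3 and C.4 in Section 4.3; the model name, decoding parameters, and helper names are illustrative assumptions, not the exact settings used in our experiments.

```python
import openai  # legacy Completions-style API, as available for GPT-3 at the time

INSTRUCTION = ("Generate a paraphrase of the following text using different words "
               "and sentence structures while still conveying the same meaning.\n\n")

def build_prompt(demonstrations, target, with_instruction=True):
    """demonstrations: list of (original, paraphrase) pairs from the remaining
    15 same-class training samples; target: the sentence to paraphrase."""
    prompt = INSTRUCTION if with_instruction else ""
    for original, paraphrase in demonstrations:
        prompt += f"{original}, in other words {paraphrase}\n"
    prompt += f"{target}, in other words"
    return prompt

def paraphrase(demonstrations, target):
    response = openai.Completion.create(
        model="text-davinci-002",      # illustrative choice of GPT-3 engine
        prompt=build_prompt(demonstrations, target),
        max_tokens=64,
        temperature=0.7,
        stop=["\n"],
    )
    return response["choices"][0]["text"].strip()
```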
Contrastive prompt-based fine-tuning LM-CPPF consists of two steps. The first step involves calculating the Masked Language Modeling (MLM) loss by using the target sentence in the given template, the specific demonstrations in the prompt, and the verbalizer matched with the target sentence's label. In the second step, we calculate the supervised contrastive loss by comparing the target prompt with another sample that uses the same template but different random demonstrations. This comparison sample can be in the same or a different class as the target prompt. When the comparison sample belongs to a different class, it is randomly sampled from the dataset. However, when the comparison sample belongs to the same class, an alternative approach is employed: this involves either selecting another sample from the same class

2OPT-175B: opt.alpa.ai and GPT-3: openai.com/api
| Task | LM-BFF | LM-BFF + SupConLoss | LM-BFF + Multi-templates | LM-CPPF (GPT-3) | LM-CPPF (OPT) | LM-CPPF (GPT-2) | LM-CPPF (FT GPT-2) |
|------|--------|---------------------|--------------------------|-----------------|---------------|-----------------|--------------------|
| SST-2 | 89.5 | 90.3 | 91.0 | 92.3 | 91.8 | 91.1 | 91.4 |
| SST-5 | 48.5 | 49.6 | 50.3 | 52.8 | 52.2 | 51.4 | 51.6 |
| MNLI | 62.3 | 63.2 | 64.8 | 68.4 | 66.2 | 65.6 | 65.8 |
| CoLA | 6.9 | 9.6 | 11.6 | 14.1 | 13.3 | 10.7 | 11.8 |
| QNLI | 61.2 | 65.4 | 67.2 | 69.2 | 68.5 | 67.5 | 67.8 |
| CR | 89.7 | 89.9 | 90.2 | 91.4 | 91.1 | 90.2 | 90.7 |
within the dataset or applying data augmentation techniques, paraphrasing in our case, to augment the target sample in order to create a new view of it.
In both of these cases, the demonstrations are not the same. Figure 1 illustrates the fine-tuning process, and Algorithm D.1 shows our methodology when paraphrasing creates a new view of the target sample. See Appendix D for more information.
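To illustrate the second step, a minimal sketch of a supervised contrastive term (Khosla et al., 2020) over the prompt representations and its combination with the MLM loss is shown below; the temperature and the loss weight are illustrative assumptions rather than our tuned values, and the pair construction (same template, different demonstrations; paraphrases for same-class views) follows the description above.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of prompt representations.
    features: (B, d) embeddings of the prompt views, labels: (B,) class ids."""
    z = F.normalize(features, dim=-1)
    logits = z @ z.t() / temperature
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # stability
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # denominator sums over all other samples in the batch
    exp_logits = torch.exp(logits) * not_self
    log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True) + 1e-12)
    # positives: other samples/views sharing the anchor's label
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    per_anchor = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(1)].mean()

def total_loss(mlm_loss, features, labels, scl_weight=1.0):
    # step 1 (MLM/verbalizer loss) + step 2 (supervised contrastive loss)
    return mlm_loss + scl_weight * supcon_loss(features, labels)
```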
## 4 Experiments
Evaluation datasets and protocol Our method is evaluated on six different classification tasks from LM-BFF (Liu et al., 2021). The reported numbers represent the average accuracy from five runs using Roberta-base (Liu et al., 2019). In Section 4.1 where LLMs are compared for paraphrasing, we also employed pre-trained and fine-tuned GPT2 as an additional model for paraphrasing, allowing us to leverage smaller models in our experiments.
For the fine-tuning of GPT-2 specifically for paraphrasing, we utilized the ParaNMT-50M (Wieting and Gimpel, 2018) dataset. More details regarding the training process can be found in Appendix A.
## 4.1 Paraphrasing In Prompt Fine-Tuning
This section presents the results of our fine-tuning approach using paraphrasing on various NLP tasks.
As shown in Table 1, LM-CPPF improves the model's accuracy on all tasks compared to the baseline method of LM-BFF+Multi-templates (Jian et al., 2022). Comparing the standard deviation of our model in five runs and the standard deviations of LM-BFF and LM-BFF + Multi-templates, we see that LM-CPPF has a higher standard deviation as it uses an intermediary model for generating paraphrases. In contrast, LM-BFF + Multitemplates integrates templates that have nearly equal performance (Jian et al., 2022).
We also compare the effect of using GPT-3, OPT175B, and GPT-2 as our language model for fewshot paraphrasing. We did two experiments with GPT-2 large: (I) Using a pre-trained version of GPT-2 where the weights are not tuned at all (II)
Fine-tuned GPT-2 where the model has been finetuned on the ParaNMT-50M dataset. The results in Table 1 indicate that GPT-3 outperforms OPT-175B
in all tasks and GPT-2 has a lower performance, which was predictable since it has significantly fewer parameters. Also, fine-tuned GPT-2 shows a better performance which suggests that GPT-2's knowledge after pre-training is not enough for doing a task like paraphrasing. About the LLMs, although both models have 175B parameters, OPT175B has a 1/7 carbon footprint of GPT-3, and it is also freely available (Zhang et al., 2022). Consequently, we base our further analysis on OPT-175B.
## 4.2 Few-Shot Paraphrasing Vs. Other Data Augmentation Methods
In this section, we present an experimental comparison of the performance of the few-shot paraphrasing approach and other data augmentation methods, including BT and EDA. The results are shown in Table 2. The BT approach is evaluated using different intermediary languages (Arabic, French, Deutsch, Chinese, and Hindi). The results indicate that BT's performance is slightly different across languages, with Chinese showing the highest performance. In general, paraphrasing approaches, including BT, are better in comparison to EDA.
In SST-2 and CR, where the samples are usually simple sentences, BT shows weaker performance
| Task | Few-shot Paraphrasing | BT (AR) | BT (FR) | BT (DE) | BT (ZH) | BT (HI) | SR | RI | RS | RD | EDA |
|------|------|------|------|------|------|------|------|------|------|------|------|
| SST-2 | 91.8 | 90.8 | 90.6 | 90.4 | 90.7 | 90.3 | 90.5 | 89.5 | 90.8 | 91.3 | 90.4 |
| SST-5 | 52.2 | 49.2 | 49.3 | 49.1 | 49.6 | 48.3 | 47.9 | 49.3 | 49.3 | 48.2 | 48.2 |
| MNLI | 66.2 | 64.3 | 63.1 | 63.8 | 65.4 | 62.2 | 62.9 | 63.2 | 61.7 | 60.2 | 60.3 |
| CoLA | 13.3 | 6.7 | 6.8 | 6.4 | 7.1 | 5.9 | 6.3 | 5.8 | 5.8 | 5.1 | 5.1 |
| QNLI | 68.5 | 66.5 | 66.2 | 65.8 | 66.6 | 64.3 | 66.1 | 65.9 | 66.3 | 65.6 | 63.3 |
| CR | 91.1 | 88.5 | 88.6 | 88.4 | 88.7 | 87.9 | 89.8 | 89.1 | 89.3 | 89.6 | 89.7 |
| Task | Template 1 | Template 2 | Template 3 | Template 4 | Template 5 | Template 6 |
|------|------|------|------|------|------|------|
| SST-2 | 91.8 | 91.2 | 91.4 | 89.1 | 92.1 | 92.4 |
| SST-5 | 52.2 | 53.1 | 52.7 | 53.4 | 53.6 | 54.1 |
| MNLI | 66.2 | 65.9 | 66.9 | 66.1 | 66.2 | 66.4 |
| CoLA | 13.3 | 12.7 | 13.2 | 13.8 | 13.4 | 13.6 |
| QNLI | 68.5 | 68.4 | 68.6 | 68.5 | 68.8 | 69.3 |
| CR | 91.1 | 91.2 | 91.3 | 91.5 | 91.7 | 92.2 |
| Task | w/o Instruct | Template 1 | Template 2 | Template 3 | Template 4 | Template 5 |
|------|------|------|------|------|------|------|
| SST-2 | 92.4 | 93.1 | 93 | 92.8 | 93.2 | 92.7 |
| SST-5 | 54.1 | 54.7 | 54.5 | 54.2 | 54.9 | 54.3 |
| MNLI | 66.9 | 67.8 | 67.5 | 67.1 | 68.2 | 67.2 |
| CoLA | 13.6 | 13.1 | 13.2 | 12.6 | 13.3 | 12.8 |
| QNLI | 69.3 | 69.8 | 70.1 | 69.5 | 70.2 | 69.6 |
| CR | 92.2 | 93.1 | 92.8 | 92.6 | 93.3 | 92.4 |
than EDA. We believe the reason is that BT can be more effective for longer sequences because longer sequences usually contain more context and nuanced meaning. Moreover, EDA employs additional knowledge from another PLM in certain actions, such as synonym substitution, similar to BT and few-shot paraphrasing.
The few-shot paraphrasing approach introduced in this work outperforms both BT and EDA. This confirms that using PLM's knowledge properly in paraphrasing is an effective and efficient data augmentation method. In few-shot paraphrasing, we instruct the model to generate paraphrases that differ in lexicalization and sentence structure.
## 4.3 Prompt Template Evaluation
As the heart of our method is the few-shot paraphrase generation done by LLMs, we investigate the impact of different paraphrasing prompt demonstrations and instruction templates on the performance of our model. Table 3 shows that the last template presented in Table C.3 is better in almost all tasks. This template, "<Original Text>, in other words <Paraphrased>", uses a complete and concrete sentence, unlike other templates, which use specific tokens, such as "[Original]", to distinguish between the original and the paraphrased version. Also, we compare different instruction templates presented in Table C.4. As we aimed to report our best result in each task here, we used the best demonstration template for any particular task, which was determined in Table 3. Table 4 shows that the fourth template achieves the best performance, as it precisely describes the task with its instruction "Generate a paraphrase of the following text using different words and sentence structures while still conveying the same meaning".
## 5 Conclusion
Our experiments demonstrated the effectiveness of using few-shot paraphrasing as a data augmentation method for contrastive prompt-based fine-tuning of PLMs. It outperformed other data augmentation methods in text classification tasks, such as EDA, multiple templates, and back translation. We also found that our approach is effective with GPT3 or OPT-175b models in generating paraphrases.
Overall, LM-CPPF improves the performance of LM-BFF by large margins using contrastive learning applied on paraphrases generated by LLMs.
## Limitations
Our approach relies on the performance of the few-shot paraphrasing, which leads to two limitations. One limitation is the difficulty of accessing GPT-3 and OPT-175B: these models are currently not widely available, and although OPT-175B has a free version, it is very slow. Another limitation is the need for annotated demonstrations for few-shot paraphrasing. While there are available models and tools, like QuillBot, that can be used for this purpose, their quality is not comparable to GPT-3 and OPT-175B, which can limit the power of these tools in our approach. Using human knowledge to paraphrase the demonstrations can help these large models generate high-quality paraphrases, but it is expensive.
## Ethics Statement
The research conducted in this paper has been carried out in accordance with the ethical principles of ACL. We have ensured that our experiments do not harm any individuals or groups and have obtained informed consent from all participants. As mentioned in the paper, we also tried to base our main experimentation on the more environmentallyfriendly option, OPT-175B.
## References
Amirhossein Abaskohi, Fatemeh Mortazavi, and Hadi Moradi. 2022. Automatic speech recognition for speech assessment of Persian preschool children. *arXiv preprint arXiv:2203.12886*.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020a. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020b. Language models are few-shot learners.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, pages 1597–1607. PMLR.

Cheng-Han Chiang, Yung-Sung Chuang, and Hung-yi Lee. 2022. Recent advances in pre-trained language models: Why do they work and how do they work. In *Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Tutorial Abstracts*, pages 8–15, Taipei. Association for Computational Linguistics.

Marcos V Conde and Kerem Turgutlu. 2021. CLIP-Art: Contrastive pre-training for fine-grained art classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 3956–3960.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 3816–3830.

Sonal Garg, Sumanth Prabhu, Hemant Misra, and G Srinivasaraghavan. 2021. Unsupervised contextual paraphrase generation using lexical control and reinforcement learning. *arXiv preprint arXiv:2103.12777*.

Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. PTR: Prompt tuning with rules for text classification.

Chaitra Hegde and Shrikumar Patil. 2020. Unsupervised paraphrase generation using pre-trained language models. *arXiv preprint arXiv:2006.05477*.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *International Conference on Machine Learning*, pages 2790–2799. PMLR.

Yiren Jian, Chongyang Gao, and Soroush Vosoughi. 2022. Contrastive learning for prompt-based few-shot language learners. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 5577–5587, Seattle, United States. Association for Computational Linguistics.

Vlado Keselj. 2009. Speech and language processing, Daniel Jurafsky and James H. Martin (Stanford University and University of Colorado at Boulder), Pearson Prentice Hall, 2009, xxxi+988 pp; hardbound, ISBN 978-0-13-187321-6.

Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. *Advances in Neural Information Processing Systems*, 33:18661–18673.

Alex Krizhevsky. 2014. One weird trick for parallelizing convolutional neural networks. *arXiv preprint arXiv:1404.5997*.

Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3609–3619, Minneapolis, Minnesota. Association for Computational Linguistics.

Phuc H Le-Khac, Graham Healy, and Alan F Smeaton. 2020. Contrastive representation learning: A framework and review. *IEEE Access*, 8:193907–193934.

Shikun Liu, Shuaifeng Zhi, Edward Johns, and Andrew J Davison. 2021. Bootstrapping semantic segmentation with regional contrast. *arXiv preprint arXiv:2104.04465*.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*.

Mengsay Loem, Sho Takase, Masahiro Kaneko, and Naoaki Okazaki. 2022. ExtraPhrase: Efficient data augmentation for abstractive summarization. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop*, pages 16–24, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.

Deshui Miao, Jiaqi Zhang, Wenbo Xie, Jian Song, Xin Li, Lijuan Jia, and Ning Guo. 2021. Simple contrastive representation adversarial learning for NLP tasks. *arXiv preprint arXiv:2111.13301*.

Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.

Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. *arXiv preprint arXiv:1905.12752*.

Timo Schick and Hinrich Schütze. 2020a. Exploiting cloze questions for few shot text classification and natural language inference. *arXiv preprint arXiv:2001.07676*.

Timo Schick and Hinrich Schütze. 2020b. It's not just size that matters: Small language models are also few-shot learners. *arXiv preprint arXiv:2009.07118*.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 86–96, Berlin, Germany. Association for Computational Linguistics.

AB Siddique, Samet Oymak, and Vagelis Hristidis. 2020. Unsupervised paraphrasing via deep reinforcement learning. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pages 1800–1809.

Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. *Advances in Neural Information Processing Systems*, 30.

Amane Sugiyama and Naoki Yoshinaga. 2019. Data augmentation using back-translation for context-aware neural machine translation. In *Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)*, pages 35–44.

Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 1199–1208.

Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2020. Contrastive multiview coding. In *European Conference on Computer Vision*, pages 776–794. Springer.

Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, and Ming Gao. 2022. Towards unified prompt tuning for few-shot text classification.

Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. *arXiv preprint arXiv:1901.11196*.

John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 451–462, Melbourne, Australia. Association for Computational Linguistics.

Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2020. Unsupervised data augmentation for consistency training.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. *Advances in Neural Information Processing Systems*, 32.

Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, and Jianyong Wang. 2022. Prompt tuning for discriminative pre-trained language models. *arXiv preprint arXiv:2205.11166*.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.

Jianing Zhou and Suma Bhat. 2021. Paraphrase generation: A survey of the state of the art. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5075–5086.

Jie Zhou, Le Tian, Houjin Yu, Zhou Xiao, Hui Su, and Jie Zhou. 2022. Dual context-guided continuous prompt tuning for few-shot learning. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 79–84, Dublin, Ireland. Association for Computational Linguistics.

Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, and Haifeng Wang. 2022. DuQM: A Chinese dataset of linguistically perturbed natural questions for evaluating the robustness of question matching models. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 7782–7794, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
## A Evaluation Setting
We show the batch size and learning rate for SupCon in Table A.1. It is important to note that the results of LM-BFF presented in the main paper were obtained using the same large batch size as our method to ensure fair comparisons. We fine-tuned with a batch size that fits into GPU memory and is divisible by the total number of examples in the task. Experiments were conducted on one NVIDIA RTX-3090 with 24 GB memory using the RoBERTa-base model. Furthermore, as per LM-BFF, we fine-tuned for a maximum of 1000 steps.

| Task  | Batch Size | Learning Rate |
|-------|------------|---------------|
| SST-2 | 8          | 7e-7          |
| SST-5 | 20         | 7e-6          |
| MNLI  | 12         | 7e-6          |
| CoLA  | 8          | 7e-6          |
| QNLI  | 8          | 7e-6          |
| CR    | 16         | 7e-6          |

Table A.1: Batch size and learning rate for the SupCon loss used for each task.
For the GPT-2 experiments in Table 1, we followed the same instructions for generating paraphrases as we used for GPT-3 and OPT-175B. For the fine-tuned GPT-2, we fine-tuned the model on ParaNMT-50M (Wieting and Gimpel, 2018) with a batch size of 32 and a learning rate of 1e-3 for 5 epochs.

We used a learning rate of 1e-5 for the MLM loss, as in LM-BFF. Although contrastive learning algorithms often perform better with larger batches, due to resource limitations we had to use half the batch size suggested in Jian et al. (2022) for the various tasks in the SCL phase. As recommended in Krizhevsky (2014), we used sqrt(0.5) ≈ 0.7 of the learning rates mentioned in Jian et al. (2022) for this phase. Therefore, we report baselines with our smaller batch size. Our method uses a single template for each task's prediction. The primary prompts are listed in Appendix B. For the prompts used in the paraphrasing phase, with the exception of the experiments in Section 4.3, we used randomly selected templates from the suggested prompts listed in Table C.3. In all of the experiments, we used OPT-175B, except for one of the results mentioned in Section 4.1, where we compared OPT-175B and GPT-3 for paraphrasing.
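For clarity, the learning-rate adjustment described above can be sketched as follows; the reference value here is only an illustrative example, not the exact per-task setting:

```python
import math

# Halving the contrastive batch size and scaling the learning rate by sqrt(0.5),
# following the heuristic attributed to Krizhevsky (2014).
reference_lr = 1e-5        # hypothetical learning rate used with the full batch size
batch_scale = 0.5          # we train with half of that batch size
adjusted_lr = reference_lr * math.sqrt(batch_scale)
print(f"adjusted learning rate: {adjusted_lr:.2e}")   # ~0.71 * reference_lr
```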
## B Task Prompts

The primary prompts utilized for each task in our experiments are displayed in Table B.2. They were handpicked by LM-BFF (Gao et al., 2021).

| Task  | Template              | Verbalizers |
|-------|-----------------------|-------------|
| SST-2 | <S1> It was [MASK] .  | positive: great, negative: terrible |
| SST-5 | <S1> It was [MASK] .  | v.positive: great, positive: good, neutral: okay, negative: bad, v.negative: terrible |
| MNLI  | <S1> ? [MASK] , <S2>  | entailment: Yes, neutral: Maybe, contradiction: No |
| CoLA  | <S1> This is [MASK] . | grammatical: correct, not_grammatical: incorrect |
| QNLI  | <S1> ? [MASK] , <S2>  | entailment: Yes, not_entailment: No |
| CR    | <S1> It was [MASK] .  | positive: great, negative: terrible |

Table B.2: Primary templates and verbalizers (label words) used in our experiments.

## C Paraphrasing Prompts

To find the best prompt for paraphrasing, we checked different corpora available online and observed how paraphrasing examples are typically introduced. We generated our prompts using this information together with our own manual modifications of these templates. In the demonstration prompt, we did not provide any explanation or description of the specific transformation applied to the input to produce the output. Instead, we simply labeled the original sample and its paraphrase. For instance, we used the token [Original] to indicate the original sentence in the dataset and the token [Paraphrase] to indicate the paraphrased sample. Table C.3 shows the templates we used for this approach.

| Demonstration Template |
|------------------------|
| Original: <Original Text> Paraphrase: <Paraphrased Text> |
| [Original]: <Original Text> [Paraphrase]: <Paraphrased Text> |
| Original: <Original Text> Rewrite: <Paraphrased Text> |
| [Original]: <Original Text> [Rewrite]: <Paraphrased Text> |
| Here is the original source: <Original Text> Here is the paraphrase: <Paraphrased Text> |
| <Original Text>, in other words <Paraphrased> |

Table C.3: The templates that were used to give examples of how the paraphrasing should be done to the pre-trained language model.

| Instructions |
|--------------|
| Summarize the following text in your own words |
| Rewrite the following text that expresses the same idea in a different way |
| Generate a paraphrase of the following text that expresses the same ideas in a different way |
| Generate a paraphrase of the following text using different words and sentence structures while still conveying the same meaning |
| Generate a summary or paraphrase of the following text that captures the essence of the ideas in a concise manner |

Table C.4: The instructions that were used before giving examples to the language model to describe the paraphrasing task.
In the instruction part of the prompts, we provided simple instructions together with examples to the language models. The instructions ask the model to generate paraphrases and are given before the examples are presented. Table C.4 shows the instructions we used to explain the task to the model at the beginning of our prompts.
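As a rough illustration of how these pieces fit together, the sketch below (with hypothetical helper names, not the exact code used in our experiments) assembles a few-shot paraphrasing prompt from one instruction in Table C.4, demonstration pairs formatted with a template from Table C.3, and the new sentence to be paraphrased:

```python
# Illustrative assembly of a few-shot paraphrasing prompt.

INSTRUCTION = ("Generate a paraphrase of the following text using different "
               "words and sentence structures while still conveying the same meaning")

def format_demo(original: str, paraphrase: str) -> str:
    # One demonstration pair, using the "[Original]/[Paraphrase]" template.
    return f"[Original]: {original}\n[Paraphrase]: {paraphrase}"

def build_prompt(demos, query: str) -> str:
    # Instruction first, then the labeled demonstrations, then the unlabeled query.
    parts = [INSTRUCTION, ""]
    parts += [format_demo(o, p) for o, p in demos]
    parts += [f"[Original]: {query}", "[Paraphrase]:"]
    return "\n".join(parts)

demos = [("He arrived late to the meeting.",
          "He showed up to the meeting behind schedule.")]
print(build_prompt(demos, "The weather ruined our picnic."))
```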
## D Contrastive Prompt-Based Fine-Tuning Details
Contrastive prompt-based fine-tuning contains two main steps: (1) masked language modeling and (2) contrastive learning.
**Masked Language Modeling (MLM) Loss.** In prompt-based methods, a classification task is approached as a masked language modeling (MLM) problem. The input consists of a sentence ($sent$) and a template with a mask ($temp$), i.e., $x_{prompt} = sent, temp(\text{[MASK]})$, and the goal is to determine the best token to fill in the [MASK]. This results in an MLM loss, represented as $\mathcal{L}_{MLM} = \text{MLM}(x_{prompt}, y)$, where $y$ is the word label associated with $x_{prompt}$. LM-BFF (Gao et al., 2021) uses demonstrations of label words to improve the results. The input for this approach includes the sentence ($sent_0$) and the masked template ($temp_0$) with a mask ([MASK]). The input also contains additional sentences ($sent_i$) in the same template ($temp_0$), each with its own verbalizer ($word_i$). The label words are sampled from the training set. The classification loss is then calculated using this input.
Algorithm D.1 Learning from MLM and SupCon with Paraphrasing
1: **Input:**
2: Training set: $D_{train}$
3: MLM model: $M$
4: Function to concatenate two strings: $Concat$
5: Cross-entropy loss: $CE$
6: Supervised contrastive loss: $SupCon$
7: Paraphrase function: $Paraphrase$
8: Function that samples from a dataset and puts the sample into the specified template: $Sample$
9: // The third parameter of this function specifies
10: // whether to put [MASK] or the verbalizer of
11: // the label
12: Template for prompts: $Template$
13: $MaxStep$ = 1000
14: **Preparing Samples:**
15: for $i < MaxStep$ do
16:     $sent, y$ = Sample($D_{train}$, $Template$, false)
17:     $demo_1$ = Sample($D_{train}$, $Template$, true)
18:     $demo_2$ = Sample($D_{train}$, $Template$, true)
19:     $demo_3$ = Sample($D_{train}$, $Template$, true)
20:     $demo_4$ = Sample($D_{train}$, $Template$, true)
21:     $demo_{in_1}$ = Concat($demo_1$, $demo_2$)
22:     $demo_{in_2}$ = Concat($demo_3$, $demo_4$)
23:     $x_{in_1}$ = Concat($T(sent)$, $T(demo_{in_1})$)
24:     $x_{in_2}$ = Concat($T(Paraphrase(sent))$, $T(demo_{in_2})$)
25:     ▷ **MLM Learning:**
26:     $output_1$ = $M(x_{in_1})$
27:     $\mathcal{L}_{MLM}$ = CE($output_1$, $y$)
28:     $\mathcal{L}_{MLM}$.backward()
29:     optimizer.step()
30:     ▷ **Contrastive Learning:**
31:     $output_2$ = $M(x_{in_2})$
32:     $\mathcal{L}_{SupCon}$ = SupCon($output_1$, $output_2$)
33:     $\mathcal{L}_{SupCon}$.backward()
34:     optimizer.step()
35: **end for**

The language model first encodes the input sentence $x_{in}$ into a sequence of tokens, which are then mapped to a sequence of hidden states $h_1, h_2, \ldots, h_L$, where $L$ denotes the length of the sequence and $d$ denotes the dimension of the hidden states. For example, in prompt-based fine-tuning, if the input sentence $x_{in}$ is "France missed the world cup in penalties," the corresponding prompt $x_{prompt}$ would be [CLS] $x_{in}$, [MASK]. [SEP]. The model then determines which verbalizer it is most likely to place at the [MASK] position. It has been found that fine-tuning with this fill-in-the-blank framework is superior to standard fine-tuning. The prediction of the model $\mathcal{M}$ for a class $y \in \mathcal{Y}$ can be expressed by mapping the label space $\mathcal{Y}$ to the label words, where $\mathcal{V}(y)$ represents the label word for class $y$. This can be written as:
$$p(y|x_{in})=p(\text{[MASK]}=\mathcal{V}(y)\,|\,x_{in})=\frac{\exp(w_{\mathcal{V}(y)}\cdot h_{\text{[MASK]}})}{\sum_{y^{\prime}\in\mathcal{Y}}\exp(w_{\mathcal{V}(y^{\prime})}\cdot h_{\text{[MASK]}})}\tag{1}$$
where the weight vector of the MLM head is denoted by w.
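A small PyTorch sketch of Equation (1) is given below; the tensors are random stand-ins for the encoder output and MLM head, and the verbalizer token ids are hypothetical:

```python
import torch

# Equation (1): softmax over the MLM-head logits of the label words (verbalizers)
# computed at the [MASK] position. All tensors here are random placeholders.
hidden_dim, vocab_size = 768, 50265
h_mask = torch.randn(hidden_dim)              # hidden state at the [MASK] position
w = torch.randn(vocab_size, hidden_dim)       # MLM head weight matrix

verbalizer_ids = {"positive": 1000, "negative": 2000}   # hypothetical token ids
logits = torch.stack([w[i] @ h_mask for i in verbalizer_ids.values()])
probs = torch.softmax(logits, dim=0)          # p(y | x_in) over the label words
print(dict(zip(verbalizer_ids, probs.tolist())))
```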
In LM-BFF, the authors add demonstrations to the input $x_{prompt}$ to improve the model's understanding of the verbalizers. As a result, the input to LM-BFF has the following form:

$$\mathcal{T}(x_{in})\oplus\mathcal{T}(x_{in}^{1},y^{1})\oplus\ldots\oplus\mathcal{T}(x_{in}^{k},y^{k})\tag{2}$$

where $\mathcal{T}(x_{in}^{i}, y^{i})$ denotes the $i$-th demonstration in the template $\mathcal{T}$, in which the actual verbalizer of the sample replaces the [MASK].
Also, k is the number of demonstrations we want to use in our prompts. This paper uses random sampling to select demonstrations from the training set.
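A minimal sketch of this concatenation for SST-2 is shown below (the helper names and the [SEP] joining are our own simplifications, not the exact LM-BFF implementation):

```python
import random

# Building T(x_in) + T(x_in^1, y^1) + ... + T(x_in^k, y^k) for SST-2.
TEMPLATE = "{sent} It was {slot} ."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def T(sent, label=None):
    # Without a label, the slot stays [MASK]; with a label, it takes the verbalizer.
    slot = "[MASK]" if label is None else VERBALIZER[label]
    return TEMPLATE.format(sent=sent, slot=slot)

def build_input(query, train_set, k=2):
    demos = random.sample(train_set, k)        # random sampling of demonstrations
    parts = [T(query)] + [T(s, y) for s, y in demos]
    return " [SEP] ".join(parts)

train = [("A gripping, beautiful film.", "positive"),
         ("Dull and far too long.", "negative")]
print(build_input("An unforgettable performance.", train))
```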
The MLM loss is calculated as follows:
$$\mathcal{L}_{MLM}=\sum_{(x_{in},y)\in\mathcal{D}_{train}}-\log\left[p(y|x_{in})\right]\tag{3}$$
Supervised Contrastive Loss. Supervised Contrastive Learning is a specific form of contrastive learning (Chen et al., 2020; Tian et al., 2020; Liu et al., 2021) that clusters two augmented batches at the class level in the feature space and calculates the contrastive loss using Equation 4:
$$\mathcal{L}_{SupCon}=SupCon(x_{1}^{\prime},x_{2}^{\prime},y)\tag{4}$$

where $x_{1}^{\prime}$ and $x_{2}^{\prime}$ are the augmented versions of the input batch $x$ and $y$ is the actual label of the batch.
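For reference, a compact, self-contained supervised contrastive loss in the spirit of Khosla et al. (2020) is sketched below; it is not our exact implementation, and the temperature and the random batch are illustrative:

```python
import torch
import torch.nn.functional as F

def supcon_loss(feats1, feats2, labels, temperature=0.1):
    """Supervised contrastive loss over two views of a labeled batch."""
    feats = F.normalize(torch.cat([feats1, feats2], dim=0), dim=1)   # [2B, d]
    labels = torch.cat([labels, labels], dim=0)                      # [2B]
    sim = feats @ feats.T / temperature                              # [2B, 2B]
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, -1e9)            # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # average log-probability of the positives for each anchor
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()

f1, f2 = torch.randn(4, 128), torch.randn(4, 128)   # two views of a batch of 4
y = torch.tensor([0, 1, 0, 1])
print(supcon_loss(f1, f2, y))
```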
To use SupCon on multiple views of an input text, we first need to obtain two views of the text:
$$x_{in_1}=\mathcal{T}(sent)\oplus\mathcal{T}(demo_1)\oplus\mathcal{T}(demo_2)\tag{5}$$

$$x_{in_2}=\mathcal{T}(Par(sent))\oplus\mathcal{T}(demo_3)\oplus\mathcal{T}(demo_4)\tag{6}$$
where $x_{in_1}$ is the same as $x_{prompt+demo}$ in LM-BFF and $\mathcal{T}$ is a function that formats the sentence according to a specific template. Instead of applying a second template, which would not give the model a genuinely new view of the sample, we use the few-shot paraphrasing function ($Par$). Also, $verb$ stands for the verbalizer used for the actual label of the sample. Using Equation 4 on the two views, we can calculate the total loss:
$$\mathcal{L}_{Total}=\mathcal{L}_{SupCon}+\mathcal{L}_{MLM}\tag{7}$$
Algorithm D.1 shows an overview of our method, which uses contrastive few-shot fine-tuning with few-shot paraphrasing. It is important to mention that learning from $\mathcal{L}_{SupCon}$ requires one additional forward and backward pass, which increases the computational cost by a factor of 1.5. However, the cost is still the same as that of Jian et al. (2022)'s model due to the O(1) time complexity of the $Paraphrase$ function. Figure 1 shows the fine-tuning procedure for one prompt sample and its new view created with few-shot paraphrasing.
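As a final illustration, the toy snippet below combines the two objectives as in Equation (7); the linear "encoder" and random inputs are stand-ins for the PLM and prompt features, it reuses the supcon_loss sketch above, and for simplicity it optimizes the summed loss in a single update rather than the two separate updates of Algorithm D.1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy combination of the MLM-style classification loss and SupCon (Equation 7).
# Requires the supcon_loss function sketched earlier in this appendix.
torch.manual_seed(0)
encoder = nn.Linear(32, 128)            # stand-in for the prompt encoder
verbalizer_head = nn.Linear(128, 2)     # maps [MASK] features to label words
params = list(encoder.parameters()) + list(verbalizer_head.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)

x1 = torch.randn(4, 32)                 # features of the original prompts
x2 = torch.randn(4, 32)                 # features of the paraphrased prompts
y = torch.tensor([0, 1, 0, 1])

feats1, feats2 = encoder(x1), encoder(x2)
loss_mlm = F.cross_entropy(verbalizer_head(feats1), y)     # Equation (3)
loss_total = loss_mlm + supcon_loss(feats1, feats2, y)     # Equations (4) and (7)
loss_total.backward()
optimizer.step()
```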